One of the key challenges for current Reinforcement Learning (RL)-based Automated Driving (AD) agents is achieving flexible, precise, and human-like behavior cost-effectively. This paper introduces a novel approach that utilizes Large Language Models (LLMs) to intuitively and effectively optimize RL reward functions in a human-centric way. We develop a framework in which instructions and dynamic environment descriptions are fed to an LLM, which uses this information to assist in generating rewards, thereby steering the behavior of RL agents towards patterns that more closely resemble human driving. Experimental results demonstrate that this approach not only makes RL agents more anthropomorphic but also improves their performance. In addition, we investigate various reward-proxy and reward-shaping strategies, revealing the significant impact of prompt design on an AD vehicle's behavior. These findings offer a promising direction for the development of more advanced and human-like automated driving systems. Our experimental data and source code can be found here.
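The abstract describes the framework only at a high level. The sketch below illustrates one plausible reading of the loop it outlines: an instruction and a textual scene description are sent to an LLM, and its scalar reply shapes the RL reward. All names here (`query_llm`, `describe_scene`, the prompt template, and the 0.5 weighting) are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of an LLM-assisted reward-shaping loop, assuming the
# framework works roughly as the abstract describes. Every identifier
# below is a hypothetical placeholder, not the authors' code.

def describe_scene(obs) -> str:
    """Serialize the dynamic environment observation into natural language.
    (Placeholder keys: a real system would report ego speed, lane,
    surrounding vehicles, etc.)"""
    return (f"ego_speed={obs['speed']:.1f} m/s, "
            f"lane={obs['lane']}, gap_ahead={obs['gap']:.1f} m")

# Human-centric instruction given to the LLM (assumed wording).
INSTRUCTION = ("Drive smoothly and human-like: keep safe gaps "
               "and avoid abrupt lane changes.")

def llm_shaped_reward(obs, base_reward: float, query_llm) -> float:
    """Combine the environment's base reward with an LLM-generated
    shaping term. `query_llm` is any text-in/text-out LLM call."""
    prompt = (
        f"Instruction: {INSTRUCTION}\n"
        f"Scene: {describe_scene(obs)}\n"
        "Rate how human-like this driving state is as a number in [-1, 1]."
    )
    try:
        shaping = float(query_llm(prompt))
        shaping = max(-1.0, min(1.0, shaping))  # clamp out-of-range replies
    except ValueError:
        shaping = 0.0                           # malformed reply: fall back to base reward
    return base_reward + 0.5 * shaping          # proxy weight is a tunable design choice
```

Whether the LLM output enters as an additive shaping term, as here, or replaces the reward outright corresponds to the reward-shaping versus reward-proxy strategies the abstract says are compared in the experiments.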