We introduce iMotion-LLM: a multimodal large language model (LLM) with trajectory prediction, tailored to guide interactive multi-agent scenarios. Unlike conventional motion prediction approaches, iMotion-LLM capitalizes on textual instructions as key inputs for generating contextually relevant trajectories. By enriching the real-world driving scenarios in the Waymo Open Dataset with textual motion instructions, we created InstructWaymo. Leveraging this dataset, iMotion-LLM integrates a pretrained LLM, fine-tuned with LoRA, to translate scene features into the LLM input space. iMotion-LLM offers significant advantages over conventional motion prediction models. First, it can generate trajectories that align with the provided instruction when the instructed direction is feasible. Second, when given an infeasible direction, it can reject the instruction, thereby enhancing safety. These findings serve as milestones in empowering autonomous navigation systems to interpret and predict the dynamics of multi-agent environments, laying the groundwork for future advancements in this field.
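As a minimal illustration of the instruction-conditioned behavior described above (all names and the feasibility check here are hypothetical placeholders, not the paper's actual API), a predictor that follows feasible direction instructions and rejects infeasible ones can be sketched as:

```python
# Hypothetical sketch: an instruction-conditioned trajectory predictor that
# rejects directions infeasible in the current scene. The feasible set and
# trajectory generation below are toy stand-ins for the model's outputs.

FEASIBLE_DIRECTIONS = {"go straight", "turn left"}  # assumed scene-dependent set


def predict_trajectory(instruction: str):
    """Return (x, y) waypoints for a feasible instruction, or None to reject."""
    if instruction not in FEASIBLE_DIRECTIONS:
        return None  # infeasible direction: reject rather than comply
    # Placeholder trajectory: waypoints advancing along the instructed direction.
    return [(float(t), 0.0) for t in range(5)]
```

A rejected instruction (return value `None`) would signal the planner to keep the current behavior rather than execute an unsafe maneuver.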