Vision-language models (VLMs) have been applied to robot task planning problems, where the robot receives a task in natural language and generates plans based on visual inputs. While current VLMs have demonstrated strong vision-language understanding capabilities, their performance in planning tasks remains far from satisfactory. At the same time, although classical task planners, such as PDDL-based ones, are strong in planning for long-horizon tasks, they do not work well in open worlds where unforeseen situations are common. In this paper, we propose a novel task planning and execution framework, called DKPROMPT, which automates VLM prompting using domain knowledge in PDDL for classical planning in open worlds. Results from quantitative experiments show that DKPROMPT outperforms classical planning, pure VLM-based planning, and a few other competitive baselines in task completion rate.
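The core idea of automating VLM prompting with PDDL domain knowledge can be illustrated with a minimal sketch. The function below is hypothetical (it is not the paper's implementation, and the action and predicate names are made up for illustration): it turns a PDDL action's preconditions and effects into natural-language yes/no questions that a VLM could be asked to verify against visual observations before and after execution.

```python
# Hypothetical sketch, not DKPROMPT's actual implementation: convert a
# PDDL action's preconditions and effects into yes/no verification
# questions that could be posed to a VLM alongside camera images.

def pddl_to_questions(action, preconditions, effects):
    """Build natural-language verification questions for one PDDL action."""
    questions = []
    # Preconditions are checked before the action is attempted.
    for p in preconditions:
        questions.append(f"Before '{action}': is it true that {p}?")
    # Effects are checked after the action, to detect execution failures.
    for e in effects:
        questions.append(f"After '{action}': is it true that {e}?")
    return questions

# Example with a made-up pick-up action and predicates.
qs = pddl_to_questions(
    "pick-up mug",
    preconditions=["the gripper is empty", "the mug is reachable"],
    effects=["the robot is holding the mug"],
)
for q in qs:
    print(q)
```

A VLM's answers to such questions could then be fed back to a classical planner, e.g. to trigger replanning when an expected effect does not hold.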