Planning in complex environments requires an agent to efficiently query a world model to find a feasible sequence of actions from start to goal. Recent work has shown that Large Language Models (LLMs), with their rich prior knowledge and reasoning capabilities, can potentially help with planning by searching over promising states and adapting to feedback from the world. In this paper, we propose and study two fundamentally competing frameworks that leverage LLMs for query-efficient planning. The first uses the LLM as a heuristic within a search-based planner, selecting promising nodes to expand and proposing promising actions. The second uses the LLM as a generative planner that proposes an entire sequence of actions from start to goal, queries a world model, and adapts based on feedback. We show that while both approaches improve upon comparable baselines, using an LLM as a generative planner results in significantly fewer world-model interactions. Our key finding is that the LLM as a planner can adapt its planning strategies based on immediate feedback more rapidly than the LLM as a heuristic can. We present evaluations and ablations on Robotouille and PDDL planning benchmarks and discuss connections to existing theory on query-efficient planning algorithms. Code is available at https://github.com/portal-cornell/llms-for-planning