Planning with LLMs is bottlenecked by token-by-token generation and repeated full forward passes, making multi-step lookahead and rollout-based search expensive in both latency and compute. We propose EmbedPlan, which replaces autoregressive next-state generation with a lightweight transition model operating in a frozen language embedding space. EmbedPlan encodes natural language state and action descriptions into vectors, predicts the next-state embedding, and retrieves the next state by nearest-neighbor similarity, enabling fast planning without fine-tuning the encoder. We evaluate next-state prediction across nine classical planning domains using six evaluation protocols of increasing difficulty: interpolation, plan-variant, extrapolation, multi-domain, cross-domain, and leave-one-out. Results show near-perfect interpolation performance but a sharp degradation when generalization requires transfer to unseen problems or unseen domains; plan-variant evaluation indicates generalization to alternative plans rather than memorization of seen trajectories. Overall, frozen embeddings support within-domain dynamics learning after observing a domain's transitions, while transfer across domain boundaries remains a bottleneck.
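The encode → predict → retrieve loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `encode` stand-in, the linear transition map `W`, and the candidate state set are all hypothetical placeholders for a frozen language encoder, a trained transition model, and the domain's reachable states.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # toy embedding dimension (assumption)

def encode(text: str) -> np.ndarray:
    # Stand-in for a frozen language encoder: derive a deterministic
    # pseudo-embedding from the text (illustration only).
    seed = abs(hash(text)) % (2**32)
    return np.random.default_rng(seed).normal(size=DIM)

# Candidate next states with precomputed (frozen) embeddings.
states = ["block A on table", "block A on block B", "gripper empty"]
state_embs = np.stack([encode(s) for s in states])

# Lightweight transition model over the concatenated (state, action)
# embedding; in the real system this would be trained, here it is random.
W = rng.normal(size=(DIM, 2 * DIM)) * 0.1

def predict_next(state: str, action: str) -> str:
    x = np.concatenate([encode(state), encode(action)])
    pred = W @ x  # predicted next-state embedding
    # Nearest-neighbor retrieval by cosine similarity.
    sims = state_embs @ pred / (
        np.linalg.norm(state_embs, axis=1) * np.linalg.norm(pred) + 1e-9
    )
    return states[int(np.argmax(sims))]

print(predict_next("block A on table", "stack A on B"))
```

Because the transition model is a small vector-to-vector map, a rollout of k steps costs k cheap matrix-vector products plus similarity searches, rather than k full autoregressive decodes.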