Vision-Language-Action (VLA) models convert high-level language instructions into concrete, executable actions, a task that is especially challenging in open-world environments. We present Visual Foresight Planning (ForeAct), a general and efficient planner that guides a VLA step-by-step using imagined future observations and subtask descriptions. With an imagined future observation, the VLA can focus on visuo-motor inference rather than high-level semantic reasoning, leading to improved accuracy and generalization. Our planner comprises a highly efficient foresight image generation module that predicts a high-quality 640$\times$480 future observation from the current visual input and language instruction within only 0.33s on an H100 GPU, together with a vision-language model that reasons over the task and produces subtask descriptions for both the generator and the VLA. Importantly, state-of-the-art VLAs can integrate our planner seamlessly by simply augmenting their visual inputs, without any architectural modification. The foresight generator is pretrained on over 1 million multi-task, cross-embodiment episodes, enabling it to learn robust embodied dynamics. We evaluate our framework on a benchmark that consists of 11 diverse, multi-step real-world tasks. It achieves an average success rate of 87.4%, demonstrating a +40.9% absolute improvement over the $\pi_0$ baseline (46.5%) and a +30.3% absolute improvement over $\pi_0$ augmented with textual subtask guidance (57.1%).
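The step-by-step guidance described above can be sketched as a minimal control loop. This is an illustrative assumption, not the authors' implementation: every class and method name here (`ForeActPlanner`, `next_subtask`, `predict`, `act`) is hypothetical, and the actual models would be neural networks rather than these stand-ins.

```python
# Hypothetical sketch of the ForeAct loop: a VLM proposes the next subtask,
# a foresight generator imagines the corresponding future observation, and
# the VLA acts on its visual input augmented with that imagined observation.
from dataclasses import dataclass
from typing import Any


@dataclass
class PlannerOutput:
    subtask: str     # textual subtask description produced by the VLM
    foresight: Any   # imagined future observation (e.g. a 640x480 image)


class ForeActPlanner:
    def __init__(self, vlm, foresight_generator):
        self.vlm = vlm                  # reasons over the task, emits subtasks
        self.gen = foresight_generator  # predicts a future observation

    def plan(self, obs, instruction) -> PlannerOutput:
        subtask = self.vlm.next_subtask(obs, instruction)
        foresight = self.gen.predict(obs, subtask)  # reported at ~0.33 s on an H100
        return PlannerOutput(subtask, foresight)


def control_step(planner: ForeActPlanner, vla, obs, instruction):
    """One step: the VLA's visual input is simply augmented with the imagined
    future observation, so no architectural modification of the VLA is needed."""
    p = planner.plan(obs, instruction)
    return vla.act(visual_inputs=[obs, p.foresight], language=p.subtask)
```

The key design point the abstract emphasizes is visible in `control_step`: the planner only extends the VLA's inputs (an extra image plus a subtask string), which is why a state-of-the-art VLA can adopt it without changing its architecture.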