Solving complex, long-horizon robotic manipulation tasks requires a deep understanding of physical interactions, reasoning about their long-term consequences, and precise high-level planning. Vision-Language Models (VLMs) offer a general perceive-reason-act framework for this goal. However, previous approaches using reflective planning to guide VLMs in correcting actions encounter significant limitations: they rely on inefficient and often inaccurate implicit learning of state-values from noisy foresight predictions, evaluate only a single greedy future, and suffer from substantial inference latency. To address these limitations, we propose a novel test-time computation framework that decouples state evaluation from action generation, providing a more direct and fine-grained supervisory signal for robust decision-making. Our method explicitly models the advantage of an action plan, quantified by its reduction in distance to the goal, and uses a scalable critic to estimate it. To address the stochastic nature of single-trajectory evaluation, we employ beam search to explore multiple future paths and aggregate them during decoding to model their expected long-term returns, leading to more robust action generation. Additionally, we introduce a lightweight, confidence-based trigger that allows for early exit when direct predictions are reliable, invoking reflection only when necessary. Extensive experiments on diverse, unseen multi-stage robotic manipulation tasks demonstrate a 24.6% improvement in success rate over state-of-the-art baselines, while significantly reducing inference time by 56.5%.
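The advantage formulation above, where an action plan's value is quantified by how much it reduces the remaining distance to the goal, can be illustrated with a minimal sketch. All names here (`distance_to_goal`, `plan_advantage`, the stage-count distance) are illustrative stand-ins, not the paper's actual critic or state representation.

```python
# Hedged sketch: advantage of an action plan as its reduction in
# distance-to-goal, A(s, s') = d(s, g) - d(s', g). In the paper a learned,
# scalable critic estimates this quantity; here a toy stage-count distance
# stands in for that critic.

def distance_to_goal(state: dict, goal: dict) -> int:
    """Toy distance: number of manipulation stages still remaining."""
    return goal["stages"] - state["stages_done"]

def plan_advantage(state: dict, next_state: dict, goal: dict) -> int:
    """How much closer the plan moves the agent toward the goal."""
    return distance_to_goal(state, goal) - distance_to_goal(next_state, goal)

goal = {"stages": 5}
state = {"stages_done": 1}
next_state = {"stages_done": 3}  # the plan completes two more stages
print(plan_advantage(state, next_state, goal))  # → 2
```

A positive advantage marks a plan that makes progress; a zero or negative one flags a plan worth reconsidering before execution.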
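The beam-search aggregation and confidence-based early exit described in the abstract can be sketched as follows. The world model, critic, thresholds, and all function names are toy assumptions for illustration, not the paper's models.

```python
# Hedged sketch: beam search over candidate action plans with expected-return
# aggregation, plus a confidence-based trigger that skips reflection when the
# direct prediction is already reliable. All components are toy stand-ins.

def rollout_returns(plan, critic, world_model, depth=2, beam=3):
    """Expand `plan` into multiple futures, prune to the beam, and average
    the surviving trajectories' returns as an expected long-term return."""
    frontier = [(plan, 0.0)]
    for _ in range(depth):
        expanded = [
            (succ, ret + critic(succ))
            for p, ret in frontier
            for succ in world_model(p)[:beam]
        ]
        # beam-search pruning: keep only the top-`beam` partial futures
        frontier = sorted(expanded, key=lambda x: -x[1])[:beam]
    # aggregate the surviving futures instead of trusting a single greedy one
    return sum(ret for _, ret in frontier) / len(frontier)

def select_plan(candidates, confidences, critic, world_model, tau=0.9):
    """Early-exit on a confident direct prediction; otherwise reflect."""
    best_plan, best_conf = max(zip(candidates, confidences), key=lambda x: x[1])
    if best_conf >= tau:
        return best_plan  # confident: skip the costly reflection pass
    # reflection path: score each candidate by its aggregated expected return
    return max(candidates, key=lambda c: rollout_returns(c, critic, world_model))

# toy setup: a "plan" is an int, successors make incremental progress,
# and the critic rewards proximity to the goal state 10
world_model = lambda p: [p + 1, p + 2, p + 3]
critic = lambda s: -abs(10 - s)

print(select_plan([2, 5], [0.4, 0.5], critic, world_model))   # low confidence → reflect
print(select_plan([2, 5], [0.4, 0.95], critic, world_model))  # confident → early exit
```

Averaging over the pruned frontier is what distinguishes this from greedy single-trajectory evaluation: a candidate is chosen for the quality of its whole set of likely futures, while the threshold `tau` trades reflection cost against robustness.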