Enabling robots to explore and act in unfamiliar environments under ambiguous human instructions, by interactively identifying task-relevant objects (e.g., locating cups or beverages for "I'm thirsty"), remains challenging for existing vision-language model (VLM)-based methods. This challenge stems from inefficient reasoning and a lack of environmental interaction, which hinder real-time task planning and execution. To address this, we propose Affordance-Aware Interactive Decision-Making and Execution for Ambiguous Instructions (AIDE), a dual-stream framework that integrates interactive exploration with vision-language reasoning: Multi-Stage Inference (MSI) serves as the decision-making stream and Accelerated Decision-Making (ADM) as the execution stream, enabling zero-shot affordance analysis and interpretation of ambiguous instructions. Extensive experiments in simulation and real-world environments show that AIDE achieves a task planning success rate of over 80\% and more than 95\% accuracy in closed-loop continuous execution at 10 Hz, outperforming existing VLM-based methods in diverse open-world scenarios.