A key limitation of learned robot control policies is their inability to generalize outside their training data. Recent works on vision-language-action models (VLAs) have shown that the use of large, internet pre-trained vision-language models as the backbone of learned robot policies can substantially improve their robustness and generalization ability. Yet, one of the most exciting capabilities of large vision-language models in other domains is their ability to reason iteratively through complex problems. Can that same capability be brought into robotics to allow policies to improve performance by reasoning about a given task before acting? Naive use of "chain-of-thought" (CoT) style prompting is significantly less effective with standard VLAs because of the relatively simple training examples that are available to them. Additionally, purely semantic reasoning about sub-tasks, as is common in regular CoT, is insufficient for robot policies that need to ground their reasoning in sensory observations and the robot state. To this end, we introduce Embodied Chain-of-Thought Reasoning (ECoT) for VLAs, in which we train VLAs to perform multiple steps of reasoning about plans, sub-tasks, motions, and visually grounded features like object bounding boxes and end effector positions, before predicting the robot action. We design a scalable pipeline for generating synthetic training data for ECoT on large robot datasets. We demonstrate that ECoT increases the absolute success rate of OpenVLA, the current strongest open-source VLA policy, by 28% across challenging generalization tasks, without any additional robot training data. Additionally, ECoT makes it easier for humans to interpret a policy's failures and correct its behavior using natural language.
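To make the reasoning structure concrete, the sketch below shows what one ECoT training example might look like as a textual chain that the policy generates token-by-token before emitting its action tokens. The field names, ordering, and serialization format here are illustrative assumptions for a minimal sketch, not the paper's exact schema; the coordinate conventions and the `render_chain` helper are likewise hypothetical.

```python
# Illustrative sketch of an ECoT-style reasoning chain a VLA could be trained
# to emit before its action tokens. Tags, ordering, and coordinate conventions
# below are hypothetical; the actual data-generation schema may differ.
ecot_example = {
    "task": "put the carrot in the pot",
    "plan": "1. locate the carrot 2. grasp it 3. move over the pot 4. release",
    "subtask": "grasp the carrot",
    "move": "move the gripper down and forward toward the carrot",
    # Visually grounded features, expressed in image pixel coordinates.
    "gripper_position": (187, 102),
    "visible_objects": {
        "carrot": (150, 90, 210, 130),  # bounding box: (x1, y1, x2, y2)
        "pot": (240, 60, 320, 150),
    },
}


def render_chain(example: dict) -> str:
    """Serialize a reasoning chain into the text the policy would generate,
    ending where the discretized robot action tokens would begin."""
    objects = "; ".join(
        f"{name} {box}" for name, box in example["visible_objects"].items()
    )
    return (
        f"TASK: {example['task']}\n"
        f"PLAN: {example['plan']}\n"
        f"SUBTASK: {example['subtask']}\n"
        f"MOVE: {example['move']}\n"
        f"GRIPPER POSITION: {example['gripper_position']}\n"
        f"VISIBLE OBJECTS: {objects}\n"
        f"ACTION: "  # action tokens follow here at train and inference time
    )


print(render_chain(ecot_example))
```

Because the chain is plain text interleaved with grounded quantities, a human can read it to diagnose where a rollout went wrong (e.g., a mislocalized bounding box versus a wrong sub-task), and can intervene by editing the natural-language portion of the chain.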