The ability to grasp objects in-the-wild from open-ended language instructions constitutes a fundamental challenge in robotics. An open-world grasping system should combine high-level contextual reasoning with low-level physical-geometric reasoning in order to be applicable in arbitrary scenarios. Recent works exploit the web-scale knowledge inherent in large language models (LLMs) to plan and reason in a robotic context, but rely on external vision and action models to ground such knowledge in the environment and parameterize actuation. This setup suffers from two major bottlenecks: a) the LLM's reasoning capacity is constrained by the quality of visual grounding, and b) LLMs lack low-level spatial understanding of the world, which is essential for grasping in contact-rich scenarios. In this work we demonstrate that modern vision-language models (VLMs) are capable of overcoming these limitations, as they are implicitly grounded and can jointly reason about semantics and geometry. We propose OWG, an open-world grasping pipeline that combines VLMs with segmentation and grasp synthesis models to unlock grounded world understanding in three stages: open-ended referring segmentation, grounded grasp planning, and grasp ranking via contact reasoning, all of which can be applied zero-shot via suitable visual prompting mechanisms. We conduct extensive evaluations on cluttered indoor scene datasets to showcase OWG's robustness in grounding from open-ended language, as well as open-world robotic grasping experiments in both simulation and hardware that demonstrate superior performance compared to previous supervised and zero-shot LLM-based methods.
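To make the three-stage structure concrete, the sketch below outlines how such a pipeline could be wired together. It is a minimal illustration only: the callable interfaces (segment, vlm_ground, vlm_plan, synthesize_grasps, vlm_rank) are hypothetical placeholders for off-the-shelf segmentation, VLM, and grasp synthesis components, not the authors' actual API.

```python
from typing import Any, Callable, List

def open_world_grasp(
    rgb: Any,
    depth: Any,
    instruction: str,
    segment: Callable[[Any], List[Any]],           # image -> object masks
    vlm_ground: Callable[..., Any],                # (image, masks, text) -> target mask
    vlm_plan: Callable[..., List[Any]],            # -> ordered masks to pick (obstructions first)
    synthesize_grasps: Callable[..., List[Any]],   # (depth, mask) -> grasp candidates
    vlm_rank: Callable[..., Any],                  # (image, mask, candidates) -> best grasp
) -> List[Any]:
    """Illustrative sketch of the three stages described in the abstract."""
    # Stage 1: open-ended referring segmentation.
    # A segmentation model proposes object masks; the visually prompted VLM
    # grounds the language instruction to one target mask.
    masks = segment(rgb)
    target = vlm_ground(rgb, masks, instruction)

    # Stage 2: grounded grasp planning.
    # The VLM reasons over the scene to decide which objects to pick and in
    # what order (e.g. clearing obstructions before the target).
    pick_order = vlm_plan(rgb, masks, target, instruction)

    # Stage 3: grasp ranking via contact reasoning.
    # A grasp synthesis model proposes candidates; the VLM ranks them by
    # reasoning about contact with the object.
    grasps = []
    for mask in pick_order:
        candidates = synthesize_grasps(depth, mask)
        grasps.append(vlm_rank(rgb, mask, candidates))
    return grasps
```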