Recent advances in vision-language models (VLMs) have led to improved performance on tasks such as visual question answering and image captioning. Consequently, these models are now well-positioned to reason about the physical world, particularly within domains such as robotic manipulation. However, current VLMs are limited in their understanding of the physical concepts (e.g., material, fragility) of common objects, which restricts their usefulness for robotic manipulation tasks that involve interaction and physical reasoning about such objects. To address this limitation, we propose PhysObjects, an object-centric dataset of 39.6K crowd-sourced and 417K automated physical concept annotations of common household objects. We demonstrate that fine-tuning a VLM on PhysObjects improves its understanding of physical object concepts, including generalization to held-out concepts, by capturing human priors of these concepts from visual appearance. We incorporate this physically grounded VLM in an interactive framework with a large language model-based robotic planner, and show improved planning performance on tasks that require reasoning about physical object concepts, compared to baselines that do not leverage physically grounded VLMs. We additionally illustrate the benefits of our physically grounded VLM on a real robot, where it improves task success rates. We release our dataset and provide further details and visualizations of our results at https://iliad.stanford.edu/pg-vlm/.
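As a rough illustration of the interactive framework described above, the sketch below shows how an LLM-based planner might query a physically grounded VLM about object concepts (e.g., fragility) and fold the answers into its planning context. This is a minimal sketch under assumed interfaces: the names PGVLM and LLMPlanner, the question format, and the canned answers are hypothetical illustrations, not the paper's actual API.

```python
"""Minimal sketch of an LLM planner querying a physically grounded VLM.
All class and method names here are illustrative assumptions."""

from dataclasses import dataclass, field


class PGVLM:
    """Stand-in for a VLM fine-tuned on PhysObjects-style annotations."""

    def query(self, image: bytes, question: str) -> str:
        # A real implementation would run VLM inference on the image;
        # we return a canned answer so the sketch stays self-contained.
        return "yes" if "fragile" in question else "no"


@dataclass
class LLMPlanner:
    """Stand-in for an LLM planner that asks grounding questions first."""

    vlm: PGVLM
    context: list[str] = field(default_factory=list)

    def ground(self, image: bytes, objects: list[str], concepts: list[str]) -> None:
        # Query the VLM for each (object, concept) pair and keep the
        # answers as extra context for the downstream planning prompt.
        for obj in objects:
            for concept in concepts:
                question = f"Is the {obj} {concept}?"
                answer = self.vlm.query(image, question)
                self.context.append(f"{question} -> {answer}")

    def plan(self, task: str) -> str:
        # A real implementation would prompt an LLM with the task plus
        # the grounded context; here we just show what that prompt holds.
        return f"Task: {task}\nGrounding:\n" + "\n".join(self.context)


if __name__ == "__main__":
    planner = LLMPlanner(vlm=PGVLM())
    planner.ground(b"", ["glass cup", "metal pot"], ["fragile"])
    print(planner.plan("move the most fragile object to the shelf"))
```

The design point the sketch captures is that physical grounding happens interactively at planning time, through queries to the VLM, rather than being baked into the planner itself.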