The integration of Multimodal Large Language Models (MLLMs) with robotic systems has significantly enhanced the ability of robots to interpret and act upon natural language instructions. Despite these advancements, conventional MLLMs are typically trained on generic image-text pairs and lack essential robotics-specific knowledge, such as affordances and physical concepts, which hampers their efficacy in manipulation tasks. To bridge this gap, we introduce ManipVQA, a novel framework designed to endow MLLMs with manipulation-centric knowledge through a Visual Question-Answering (VQA) format. This approach encompasses not only tool detection and affordance recognition but also a comprehensive understanding of physical concepts. Our approach starts with collecting a varied set of images of interactive objects, which pose a broad range of challenges in tool detection, affordance recognition, and physical concept prediction. To seamlessly integrate this robotics-specific knowledge with the inherent vision-reasoning capabilities of MLLMs, we adopt a unified VQA format and devise a fine-tuning strategy that preserves the original vision-reasoning abilities while incorporating the new robotic insights. Empirical evaluations conducted in robotic simulators and across various vision task benchmarks demonstrate the robust performance of ManipVQA. Code and dataset will be made publicly available at https://github.com/SiyuanHuang95/ManipVQA.
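To make the "unified VQA format" mentioned above concrete, the following is a minimal sketch of what a single training sample might look like. The field names, task labels, and normalized bounding-box answer convention are illustrative assumptions, not the actual schema of the released ManipVQA dataset.

```python
# Hypothetical example of a unified VQA-style sample for affordance grounding.
# All keys and the coordinate convention below are assumptions for exposition only.
sample = {
    "image": "images/hammer_on_table.jpg",   # RGB image containing an interactive object
    "question": "Which region of the object affords grasping? "
                "Answer with a bounding box.",
    "answer": "[0.12, 0.55, 0.34, 0.92]",    # normalized [x1, y1, x2, y2] box, given as text
    "task": "affordance_grounding",           # other tasks: tool detection, physical concept QA
}
```

Casting detection, affordance, and physical-concept prediction into the same question-answer text format allows a single MLLM to be fine-tuned on all of them jointly, alongside its original vision-language data.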