It is fundamentally challenging for robots to serve as useful assistants in human environments, because doing so requires addressing a spectrum of sub-problems across robotics, including perception, language understanding, reasoning, and planning. Recent advances in Multimodal Large Language Models (MLLMs) have demonstrated exceptional abilities in solving complex mathematical problems and in commonsense and abstract reasoning. This has led to the recent use of MLLMs as the brain of robotic systems, enabling these models to conduct high-level planning before triggering low-level control actions for task execution. However, it remains uncertain whether existing MLLMs are reliable enough to serve as the brain of a robot. In this study, we introduce MMRo, the first benchmark for evaluating Multimodal LLMs for Robotics, which tests the capabilities of MLLMs in robot applications. Specifically, we identify four essential capabilities that an MLLM must possess to qualify as a robot's central processing unit: perception, task planning, visual reasoning, and safety measurement. We have developed several scenarios for each capability, resulting in a total of 14 metrics for evaluation. We present experimental results for various MLLMs, including both commercial and open-source models, to assess the performance of existing systems. Our findings indicate that no single model excels in all areas, suggesting that current MLLMs are not yet trustworthy enough to serve as the cognitive core for robots. Our data can be found at https://mm-robobench.github.io/.