Recent advances in large vision-language models (VLMs) have demonstrated generalizable open-vocabulary perception and reasoning, yet their capability for long-horizon, closed-loop manipulation on real robots in unstructured, in-the-wild environments remains unclear. Prior VLM-based manipulation pipelines are difficult to compare across research groups' setups, and many evaluations rely on simulation, privileged state, or bespoke experimental environments. We present AgenticLab, a model-agnostic robot agent platform and benchmark for open-world manipulation. AgenticLab provides a closed-loop agent pipeline for perception, task decomposition, online verification, and replanning. Using AgenticLab, we benchmark state-of-the-art VLM-based agents on real-robot tasks in unstructured environments. Our benchmark reveals several failure modes that offline vision-language tests (e.g., VQA and static image understanding) fail to capture, including breakdowns in multi-step grounding consistency, object grounding under occlusion and scene changes, and insufficient spatial reasoning for reliable manipulation. We will release the full hardware and software stack to support reproducible evaluation and accelerate research on general-purpose robot agents.