We introduce Wonderful Team, a multi-agent Vision Large Language Model (VLLM) framework designed to solve robotics problems in a zero-shot regime. In our context, zero-shot means that for a novel environment, we provide a VLLM with an image of the robot's surroundings and a task description, and the VLLM outputs the sequence of actions necessary for the robot to complete the task. Unlike prior work that requires fine-tuning parts of the pipeline -- such as adjusting an LLM on robot-specific data or training separate vision encoders -- our approach demonstrates that with careful engineering, a single off-the-shelf VLLM can autonomously handle all aspects of a robotics task, from high-level planning to low-level location extraction and action execution. Crucially, compared to using GPT-4o alone, Wonderful Team is self-corrective and capable of iteratively fixing its own mistakes, enabling it to solve challenging long-horizon tasks. We validate our framework through extensive experiments, both in simulated environments using VIMABench and in real-world settings. Our system showcases the ability to handle diverse tasks such as manipulation, goal-reaching, and visual reasoning -- all in a zero-shot manner. These results underscore a key point: vision-language models have progressed rapidly in the past year and should be strongly considered as a backbone for many robotics problems moving forward.
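The zero-shot interface described above (image plus task description in, action sequence out, with iterative self-correction) can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: `query_vllm`, `verify`, and `plan_task` are hypothetical names, and the VLLM call is stubbed with a canned response so the sketch is self-contained.

```python
def query_vllm(image, prompt):
    """Stand-in for a call to an off-the-shelf VLLM (e.g. GPT-4o).

    A real implementation would send the image and prompt to the model's
    API; here we return a canned action plan so the sketch runs offline.
    """
    return ["move_to(block)", "grasp(block)", "move_to(bin)", "release()"]


def verify(image, actions):
    """Self-correction check: in the real system, the VLLM would be asked
    whether the proposed plan accomplishes the task given the image.
    Stubbed to always succeed here."""
    return True


def plan_task(image, task, max_rounds=3):
    """Zero-shot loop: propose an action sequence, then iteratively
    ask the model to fix its own mistakes until the plan verifies."""
    actions = query_vllm(image, f"Task: {task}. Output an action sequence.")
    for _ in range(max_rounds):
        if verify(image, actions):
            return actions
        # Feed the failed plan back to the model for self-correction.
        actions = query_vllm(image, f"Fix this plan for '{task}': {actions}")
    return actions


plan = plan_task(image=None, task="put the block in the bin")
```

The key design point this sketch mirrors is that a single model handles both the initial planning call and the correction call; no fine-tuned components or separate vision encoders are involved.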