The integration of large language models (LLMs) with embodied agents has improved high-level reasoning capabilities; however, a critical gap remains between semantic understanding and physical execution. While vision-language-action (VLA) and vision-language-navigation (VLN) systems enable robots to perform manipulation and navigation tasks from natural language instructions, they still struggle with long-horizon, temporally structured tasks. Existing frameworks typically adopt modular pipelines for data collection, skill training, and policy deployment, resulting in high costs for experimental validation and policy optimization. To address these limitations, we propose ROSClaw, an agent framework for heterogeneous robots that integrates policy learning and task execution within a unified vision-language model (VLM) controller. The framework leverages e-URDF representations of heterogeneous robots as physical constraints to construct a sim-to-real topological mapping, enabling real-time access to the physical states of both simulated and real-world agents. We further incorporate a data collection and state accumulation mechanism that stores robot states, multimodal observations, and execution trajectories during real-world execution, enabling subsequent iterative policy optimization. During deployment, a unified agent maintains semantic continuity between reasoning and execution and dynamically assigns task-specific control to different agents, thereby improving robustness in multi-policy execution. By establishing an autonomous closed-loop framework, ROSClaw minimizes reliance on robot-specific development workflows. The framework supports hardware-level validation, automated generation of SDK-level control programs, and tool-based execution, enabling rapid cross-platform transfer and continual improvement of robotic skills. Project page: https://www.rosclaw.io/.