Online policy learning directly in the physical world is a promising yet challenging direction for embodied intelligence. Unlike simulation, real-world systems cannot be arbitrarily accelerated, cheaply reset, or massively replicated, which makes scalable data collection, heterogeneous deployment, and effective long-horizon training difficult. These challenges suggest that real-world policy learning is not only an algorithmic issue but fundamentally a systems problem. We present USER, a Unified and extensible SystEm for Real-world online policy learning. USER treats physical robots as first-class hardware resources alongside GPUs through a unified hardware abstraction layer, enabling automatic discovery, management, and scheduling of heterogeneous robots. To address edge-cloud communication, USER introduces an adaptive communication plane with tunneling-based networking, distributed data channels for traffic localization, and streaming-multiprocessor-aware weight synchronization to regulate GPU-side overhead. On top of this infrastructure, USER organizes learning as a fully asynchronous framework with a persistent, cache-aware buffer, enabling efficient long-horizon experiments with robust crash recovery and reuse of historical data. In addition, USER provides extensible abstractions for rewards, algorithms, and policies, supporting online imitation or reinforcement learning of CNN/MLP policies, generative policies, and large vision-language-action (VLA) models within a unified pipeline. Results in both simulation and the real world show that USER enables multi-robot coordination, control of heterogeneous manipulators, edge-cloud collaboration with large models, and long-running asynchronous training, offering a unified and extensible systems foundation for real-world online policy learning.