A long-standing goal in robotics is a generalist policy that can be deployed zero-shot on new robot embodiments without per-embodiment adaptation. Despite large-scale multi-embodiment pre-training, existing Vision-Language-Action models (VLAs) remain tightly coupled to their training embodiments and typically require costly fine-tuning. We introduce Language-Action Pre-training (LAP), a simple recipe that represents low-level robot actions directly in natural language, aligning action supervision with the pre-trained vision-language model's input-output distribution. LAP requires no learned tokenizer, no costly annotation, and no embodiment-specific architectural design. Based on LAP, we present LAP-3B, which to the best of our knowledge is the first VLA to achieve substantial zero-shot transfer to previously unseen robot embodiments without any embodiment-specific fine-tuning. Across multiple novel robots and manipulation tasks, LAP-3B attains over 50% average zero-shot success, delivering roughly a 2x improvement over the strongest prior VLAs. We further show that LAP enables efficient adaptation and favorable scaling, while unifying action prediction and VQA in a shared language-action format that yields additional gains through co-training.