A long-standing goal in robotics is a generalist policy that can be deployed zero-shot on new robot embodiments without per-embodiment adaptation. Despite large-scale multi-embodiment pre-training, existing Vision-Language-Action models (VLAs) remain tightly coupled to their training embodiments and typically require costly fine-tuning. We introduce Language-Action Pre-training (LAP), a simple recipe that represents low-level robot actions directly in natural language, aligning action supervision with the pre-trained vision-language model's input-output distribution. LAP requires no learned tokenizer, no costly annotation, and no embodiment-specific architectural design. Based on LAP, we present LAP-3B, which to the best of our knowledge is the first VLA to achieve substantial zero-shot transfer to previously unseen robot embodiments without any embodiment-specific fine-tuning. Across multiple novel robots and manipulation tasks, LAP-3B attains over 50% average zero-shot success, delivering roughly a 2x improvement over the strongest prior VLAs. We further show that LAP enables efficient adaptation and favorable scaling, while unifying action prediction and VQA in a shared language-action format that yields additional gains through co-training.