Vision-Language-Action (VLA) models have significantly advanced robotic manipulation by leveraging the strong perception capabilities of pretrained vision-language models (VLMs). By integrating action modules into these pretrained models, VLA methods exhibit improved generalization. However, training them from scratch is costly. In this work, we propose a simple yet effective distillation-based framework that equips VLMs with action-execution capability by transferring knowledge from pretrained small action models. Our architecture retains the original VLM structure, adding only an action token and a state encoder to incorporate physical inputs. To distill action knowledge, we adopt a two-stage training strategy. First, we perform lightweight alignment by mapping VLM hidden states into the action space of the small action model, which enables effective reuse of its pretrained action decoder and avoids expensive pretraining. Second, we selectively fine-tune the language model, state encoder, and action modules, enabling the system to integrate multimodal inputs and generate precise actions. Specifically, the action token provides the VLM with a direct handle for predicting future actions, while the state encoder allows the model to incorporate robot dynamics not captured by vision alone. This design yields substantial efficiency gains over training large VLA models from scratch. Compared with previous state-of-the-art methods, our approach achieves a 97.3% average success rate on LIBERO (an 11.8% improvement) and 93.5% on LIBERO-LONG (a 24.5% improvement). In real-world experiments across five manipulation tasks, our method consistently outperforms the teacher model, reaching an 82.0% success rate (a 17% improvement), which demonstrates that action distillation effectively enables VLMs to generate precise actions while substantially reducing training costs.
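To make the first-stage alignment concrete, the sketch below illustrates one plausible form of the idea: a projection head maps the VLM hidden state at the action token (fused with an encoded robot state) into the small action model's latent action space, and is trained to match the teacher's latents so the teacher's pretrained action decoder can be reused frozen. All module names, dimensions, and the MSE alignment objective are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of the stage-1 alignment step (hypothetical names and shapes;
# the exact interfaces of the VLM and the small action model are not specified here).
import torch
import torch.nn as nn


class ActionAlignmentHead(nn.Module):
    """Maps the VLM hidden state at the action token into the small action
    model's latent action space, so its pretrained decoder can be reused."""

    def __init__(self, vlm_dim: int, action_latent_dim: int, state_dim: int):
        super().__init__()
        # State encoder: projects robot proprioception into the VLM hidden space.
        self.state_encoder = nn.Linear(state_dim, vlm_dim)
        # Lightweight projector: VLM hidden state -> teacher action latent space.
        self.proj = nn.Sequential(
            nn.Linear(vlm_dim, vlm_dim),
            nn.GELU(),
            nn.Linear(vlm_dim, action_latent_dim),
        )

    def forward(self, action_token_hidden: torch.Tensor, robot_state: torch.Tensor) -> torch.Tensor:
        # Fuse the action-token hidden state with the encoded robot state.
        fused = action_token_hidden + self.state_encoder(robot_state)
        return self.proj(fused)


def alignment_loss(student_latent: torch.Tensor, teacher_latent: torch.Tensor) -> torch.Tensor:
    # Stage-1 objective (assumed here to be MSE): match the latents produced by
    # the pretrained small action model, whose decoder stays frozen.
    return nn.functional.mse_loss(student_latent, teacher_latent)


if __name__ == "__main__":
    head = ActionAlignmentHead(vlm_dim=1024, action_latent_dim=256, state_dim=14)
    h = torch.randn(8, 1024)          # VLM hidden state at the action token
    s = torch.randn(8, 14)            # robot state, e.g. joint positions + gripper
    z_teacher = torch.randn(8, 256)   # latents from the pretrained small action model
    loss = alignment_loss(head(h, s), z_teacher)
    print(loss.item())
```

In stage two, one would unfreeze the language model, the state encoder, and the action modules and fine-tune them jointly on demonstration data, while the rest of the VLM remains intact.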