Vision-Language-Action (VLA) models have shown remarkable potential in visuomotor control and instruction comprehension through end-to-end learning. However, current VLA models face significant challenges: they are slow at inference and require extensive pre-training on large amounts of robotic data, making real-world deployment difficult. In this paper, we introduce a new family of compact vision-language-action models, called TinyVLA, which offers two key advantages over existing VLA models: (1) faster inference speeds, and (2) improved data efficiency, eliminating the need for a pre-training stage. Our framework incorporates two essential components to build TinyVLA: (1) initializing the policy backbone with robust, high-speed multimodal models, and (2) integrating a diffusion policy decoder during fine-tuning to enable precise robot actions. We conducted extensive evaluations of TinyVLA in both simulation and on real robots, demonstrating that our approach significantly outperforms the state-of-the-art VLA model, OpenVLA, in terms of speed and data efficiency, while delivering comparable or superior task performance. Additionally, TinyVLA exhibits strong generalization across various dimensions, including language instructions, novel objects, unseen positions, changes in object appearance, background variations, and environmental shifts, often matching or exceeding the performance of OpenVLA. We believe that TinyVLA offers an interesting perspective on utilizing pre-trained multimodal models for policy learning. Our project is at https://tiny-vla.github.io.