Pretraining Vision-Language-Action (VLA) policies on internet-scale video is appealing, yet current latent-action objectives often learn the wrong thing: they remain anchored to pixel variation rather than action-relevant state transitions, making them vulnerable to appearance bias, nuisance motion, and information leakage. We introduce VLA-JEPA, a JEPA-style pretraining framework that sidesteps these pitfalls by design. The key idea is leakage-free state prediction: a target encoder produces latent representations from future frames, while the student pathway sees only the current observation -- future information is used solely as supervision targets, never as input. By predicting in latent space rather than pixel space, VLA-JEPA learns dynamics abstractions that are robust to camera motion and irrelevant background changes. This yields a simple two-stage recipe -- JEPA pretraining followed by action-head fine-tuning -- without the multi-stage complexity of prior latent-action pipelines. Experiments on LIBERO, LIBERO-Plus, SimplerEnv, and real-world manipulation tasks show that VLA-JEPA achieves consistent gains in generalization and robustness over existing methods.
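The leakage-free prediction scheme described above can be sketched in a few lines. This is a minimal toy, not the paper's implementation: the linear encoders, toy dimensions, learning rate, and EMA rate are all illustrative assumptions; the point is only to show where future information enters the computation (as a gradient-free target, never as student input).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes; illustrative assumptions, not details from the paper.
OBS_DIM, LATENT_DIM = 32, 8

# Student encoder, predictor, and target encoder as simple linear maps.
W_student = 0.1 * rng.normal(size=(LATENT_DIM, OBS_DIM))
W_pred = 0.1 * rng.normal(size=(LATENT_DIM, LATENT_DIM))
W_target = W_student.copy()  # target starts as a copy of the student

def jepa_step(obs_t, obs_future, lr=1e-2, ema=0.99):
    """One pretraining update. The student pathway sees only obs_t;
    obs_future enters only through the target encoder, whose output
    is treated as a constant (no gradient flows through it)."""
    global W_student, W_pred, W_target
    z_t = W_student @ obs_t          # student latent of the current frame
    z_hat = W_pred @ z_t             # predicted latent of the future frame
    z_tgt = W_target @ obs_future    # supervision target only, never an input
    err = z_hat - z_tgt
    loss = float(err @ err) / LATENT_DIM
    # Gradients of the latent-space squared error (computed before updates).
    g = 2.0 * err / LATENT_DIM
    grad_pred = np.outer(g, z_t)
    grad_student = np.outer(W_pred.T @ g, obs_t)
    W_pred -= lr * grad_pred
    W_student -= lr * grad_student
    # Target weights follow the student by EMA, never by gradient descent.
    W_target = ema * W_target + (1.0 - ema) * W_student
    return loss

obs = rng.normal(size=OBS_DIM)
fut = obs + 0.05 * rng.normal(size=OBS_DIM)  # a nearby "future" observation
losses = [jepa_step(obs, fut) for _ in range(200)]
```

In the full two-stage recipe, this pretraining phase would be followed by fine-tuning an action head on top of the learned student representation; the toy above only illustrates the leakage-free supervision pattern.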