Pretraining Vision-Language-Action (VLA) policies on internet-scale video is appealing, yet current latent-action objectives often learn the wrong signal: they remain anchored to pixel variation rather than action-relevant state transitions, leaving them vulnerable to appearance bias, nuisance motion, and information leakage. We introduce VLA-JEPA, a JEPA-style pretraining framework that sidesteps these pitfalls by design. The key idea is \emph{leakage-free state prediction}: a target encoder produces latent representations from future frames, while the student pathway sees only the current observation -- future information is used solely as a supervision target, never as an input. By predicting in latent space rather than pixel space, VLA-JEPA learns dynamics abstractions that are robust to camera motion and irrelevant background changes. This yields a simple two-stage recipe -- JEPA pretraining followed by action-head fine-tuning -- without the multi-stage complexity of prior latent-action pipelines. Experiments on LIBERO, LIBERO-Plus, SimplerEnv, and real-world manipulation tasks show that VLA-JEPA achieves consistent gains in generalization and robustness over existing methods.