Lifelong learning is critical for embodied agents in open-world environments, where reinforcement learning fine-tuning has emerged as an important paradigm for enabling Vision-Language-Action (VLA) models to master dexterous manipulation through environmental interaction. Continual Reinforcement Learning (CRL) is therefore a promising pathway for deploying VLA models in lifelong robotic scenarios, yet balancing stability (retaining old skills) and plasticity (learning new ones) remains a formidable challenge for existing methods. We introduce CRL-VLA, a framework for continual post-training of VLA models with rigorous theoretical bounds. We derive a unified performance bound linking the stability-plasticity trade-off to goal-conditioned advantage magnitudes scaled by policy divergence. CRL-VLA resolves this dilemma via asymmetric regulation: constraining advantage magnitudes on prior tasks while enabling controlled growth on new tasks. This is realized through a simple yet effective dual-critic architecture with a novel Goal-Conditioned Value Formulation (GCVF), in which a frozen critic anchors semantic consistency and a trainable estimator drives adaptation. Experiments on the LIBERO benchmark demonstrate that CRL-VLA effectively harmonizes these conflicting objectives, outperforming baselines in both anti-forgetting and forward adaptation.