Continual Reinforcement Learning (CRL) for Vision-Language-Action (VLA) models is a promising direction toward self-improving embodied agents that can adapt in open-ended, evolving environments. However, conventional wisdom from continual learning suggests that naive Sequential Fine-Tuning (Seq. FT) leads to catastrophic forgetting, necessitating complex CRL strategies. In this work, we take a step back and conduct a systematic study of CRL for large pretrained VLAs across three models and five challenging lifelong RL benchmarks. We find that, contrary to established belief, simple Seq. FT with low-rank adaptation (LoRA) is remarkably strong: it achieves high plasticity, exhibits little to no forgetting, and retains strong zero-shot generalization, frequently outperforming more sophisticated CRL methods. Through detailed analysis, we show that this robustness arises from a synergy between the large pretrained model, parameter-efficient adaptation, and on-policy RL. Together, these components reshape the stability-plasticity trade-off, making continual adaptation both stable and scalable. Our results position Sequential Fine-Tuning as a powerful method for continual RL with VLAs and provide new insights into lifelong learning in the large model era. Code is available at github.com/UT-Austin-RobIn/continual-vla-rl.
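To make the recipe concrete, below is a minimal sketch of the core idea: a frozen pretrained policy adapted sequentially across tasks with trainable low-rank (LoRA) updates and on-policy policy-gradient steps, with no replay buffer. This is an illustrative toy, not the paper's implementation: the `LoRALinear` class, the bandit-style `make_task` reward, and the tiny linear policy are hypothetical stand-ins for the VLA models and on-policy RL algorithms studied in the work.

```python
# Minimal sketch (assumed setup, not the paper's code): sequential fine-tuning
# of a frozen "pretrained" policy via hand-rolled LoRA adapters and REINFORCE.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight plus a trainable low-rank update: W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # stability: the pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: starts as a no-op
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

torch.manual_seed(0)
obs_dim, n_actions = 16, 4
policy = LoRALinear(nn.Linear(obs_dim, n_actions))  # stand-in for a pretrained VLA
opt = torch.optim.Adam([p for p in policy.parameters() if p.requires_grad], lr=1e-2)

def make_task(task_id: int):
    """Hypothetical toy task: reward 1 if the action matches a task-specific target."""
    target = task_id % n_actions
    return lambda action: 1.0 if action == target else 0.0

# Sequential fine-tuning: adapt on task 0, then task 1, ... with no replay.
for task_id in range(3):
    reward_fn = make_task(task_id)
    for _ in range(200):  # on-policy REINFORCE updates
        obs = torch.randn(obs_dim)
        dist = torch.distributions.Categorical(logits=policy(obs))
        action = dist.sample()
        loss = -dist.log_prob(action) * reward_fn(action.item())
        opt.zero_grad(); loss.backward(); opt.step()
    print(f"finished task {task_id}")
```

The two design choices the abstract credits for stability are both visible here: the pretrained weights are never updated, and only the small, zero-initialized adapter matrices absorb on-policy gradients from each new task.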