Federated continual learning (FCL) allows distributed autonomous fleets to adapt collaboratively to evolving terrain types across extended mission lifecycles. However, current approaches face several key challenges: 1) they use uniform protection strategies that do not account for the varying sensitivity to forgetting of different network layers; 2) they focus primarily on preventing forgetting during training, without addressing the long-term effects of cumulative drift; and 3) they often depend on idealized simulations that fail to capture the real-world heterogeneity present in distributed fleets. In this paper, we propose a lifecycle-aware dual-timescale FCL framework that incorporates both training-time (pre-forgetting) prevention and post-forgetting recovery. Within this framework, we design a layer-selective rehearsal strategy that mitigates immediate forgetting during local training, and a rapid knowledge recovery strategy that restores degraded models after long-term cumulative drift. We present a theoretical analysis that characterizes heterogeneous forgetting dynamics and establishes the inevitability of long-term degradation. Our experimental results show that the framework achieves up to 8.3\% mIoU improvement over the strongest federated baseline and up to 31.7\% over conventional fine-tuning. We also deploy the FCL framework on a real-world rover testbed to assess system-level robustness under realistic constraints; the test results further confirm the effectiveness of our design.