Continual learning aims to acquire tasks sequentially without catastrophic forgetting, yet standard strategies face a core tradeoff: regularization-based methods (e.g., EWC) can overconstrain updates when task optima overlap only weakly, while replay-based methods retain performance but can drift due to imperfect replay. We study a hybrid perspective: \emph{trust region continual learning}, which combines generative replay with a Fisher-metric trust region constraint. We show that, under local approximations, the resulting update admits a MAML-style interpretation with a single implicit inner step: replay supplies an old-task gradient signal (query-like), while the Fisher-weighted penalty provides efficient offline curvature shaping (support-like). This yields an emergent meta-learning property in continual learning: the model becomes an initialization that rapidly \emph{re-converges} to prior task optima after each task transition, without explicitly optimizing a bilevel objective. Empirically, on task-incremental diffusion image generation and continual diffusion-policy control, trust region continual learning achieves the best final performance and retention, and consistently recovers early-task performance faster than EWC, replay, and continual meta-learning baselines.
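To make the hybrid update concrete, the following is a minimal sketch of the objective implied by the description above; the symbols $\mathcal{L}_{\text{new}}$, $\mathcal{L}_{\text{replay}}$, $\theta_{\text{old}}$, $F$, $\lambda$, $\mu$, $\eta$, and $\epsilon$ are illustrative placeholders introduced here, not notation fixed by the abstract. The new-task loss is optimized jointly with a generative-replay term under a Fisher-weighted trust region around the previous optimum,
\begin{equation}
\min_{\theta}\; \mathcal{L}_{\text{new}}(\theta) + \lambda\, \mathcal{L}_{\text{replay}}(\theta)
\quad \text{s.t.} \quad
\tfrac{1}{2}\,(\theta - \theta_{\text{old}})^{\top} F\, (\theta - \theta_{\text{old}}) \le \epsilon,
\end{equation}
whose penalized gradient step
\begin{equation}
\theta \;\leftarrow\; \theta - \eta \Big( \nabla \mathcal{L}_{\text{new}}(\theta) + \lambda\, \nabla \mathcal{L}_{\text{replay}}(\theta) + \mu\, F\, (\theta - \theta_{\text{old}}) \Big)
\end{equation}
can be read in the MAML-style terms used above: the replay gradient plays the role of a query-set signal, while the Fisher term acts as an implicit, curvature-aware inner step toward $\theta_{\text{old}}$.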