To cope with uncertain changes in the external world, intelligent systems must continually learn from complex, evolving environments and respond in real time. This ability, collectively known as general continual learning (GCL), encapsulates practical challenges such as online data streams and blurry task boundaries. Although leveraging pretrained models (PTMs) has greatly advanced conventional continual learning (CL), these methods remain limited in reconciling the diverse and temporally mixed information within a single pass, resulting in sub-optimal GCL performance. Inspired by meta-plasticity and reconstructive memory in neuroscience, we introduce an innovative approach named Meta Post-Refinement (MePo) for PTM-based GCL. This approach constructs pseudo task sequences from pretraining data and develops a bi-level meta-learning paradigm to refine the pretrained backbone, which serves as a prolonged pretraining phase yet greatly facilitates rapid adaptation of representation learning to downstream GCL tasks. MePo further initializes a meta covariance matrix as the reference geometry of the pretrained representation space, enabling GCL to exploit second-order statistics for robust output alignment. MePo serves as a plug-in strategy that achieves significant performance gains across a variety of GCL benchmarks and pretrained checkpoints in a rehearsal-free manner (e.g., 15.10\%, 13.36\%, and 12.56\% on CIFAR-100, ImageNet-R, and CUB-200 under Sup-21/1K). Our source code is available at \href{https://github.com/SunGL001/MePo}{MePo}.