Continual learning, especially class-incremental learning (CIL), built on pre-trained models (PTMs) has garnered substantial research interest in recent years. However, how to learn feature representations that are both discriminative and comprehensive, while maintaining stability and plasticity over very long task sequences, remains an open problem. We propose CaRE, a scalable Continual leArner with an efficient Bi-Level Routing Mixture-of-Experts (BR-MoE). The core of BR-MoE is a bi-level routing mechanism: a router-selection stage first dynamically activates the relevant task-specific routers, and an expert-routing stage then dynamically activates and aggregates experts, injecting discriminative and comprehensive representations into every intermediate network layer. In addition, we introduce a challenging evaluation protocol for comprehensively assessing CIL methods on very long task sequences spanning hundreds of tasks. Extensive experiments show that CaRE achieves leading performance across a variety of datasets and task settings, including commonly used CIL datasets under classical CIL settings (e.g., 5-20 tasks). To the best of our knowledge, CaRE is the first continual learner that scales to very long task sequences (100 to over 300 non-overlapping tasks), while outperforming all baselines by a large margin in this regime. Code will be publicly released at https://github.com/LMMMEng/CaRE.git.
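The two-stage routing described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; all names (`router_keys`, `br_moe`, the gating and aggregation details) are illustrative assumptions, showing only the control flow: first select a task-specific router by key matching, then let that router gate over a shared expert pool and aggregate the top-K expert outputs.

```python
import numpy as np

# Illustrative sketch of a bi-level routing MoE layer (assumed details,
# not the paper's actual architecture).
rng = np.random.default_rng(0)

D, E, R, K = 8, 4, 3, 2  # feature dim, num experts, num routers, experts kept

experts = [rng.standard_normal((D, D)) for _ in range(E)]  # shared expert weights
routers = [rng.standard_normal((D, E)) for _ in range(R)]  # per-task gating matrices
router_keys = rng.standard_normal((R, D))                  # keys for router selection

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def br_moe(x):
    # Level 1: router selection -- activate the task-specific router
    # whose key best matches the input feature.
    r = int(np.argmax(router_keys @ x))
    # Level 2: expert routing -- gate over the expert pool, keep the
    # top-K experts, and aggregate their outputs with renormalized weights.
    gate = softmax(x @ routers[r])
    top = np.argsort(gate)[-K:]
    w = gate[top] / gate[top].sum()
    return sum(wi * (experts[i] @ x) for wi, i in zip(w, top))

y = br_moe(rng.standard_normal(D))  # one feature in, one mixed feature out
```

In a layer-wise design such as BR-MoE, a block like this would sit at every intermediate layer, so each layer's representation is shaped by the selected router and its activated experts.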