Existing replay- and distillation-based class-incremental learning (CIL) methods are effective at retaining past knowledge but remain constrained by the stability-plasticity dilemma. Because their resulting models are learned over a sequence of incremental tasks, they encode rich representations and can be regarded as pre-trained bases. Building on this view, we propose a plug-in extension paradigm, termed Deployment of LoRA Components (DLC), to enhance them. For each task, we use Low-Rank Adaptation (LoRA) to inject task-specific residuals into the deep layers of the base model. During inference, the representations carrying task-specific residuals are aggregated to produce classification predictions. To mitigate interference from non-target LoRA plugins, we introduce a lightweight weighting unit that learns to assign importance scores to the different LoRA-tuned representations. Like downloadable content in software, DLC serves as a plug-and-play enhancement that efficiently extends base methods. Remarkably, on the large-scale ImageNet-100 benchmark, with merely 4\% of the parameters of a standard ResNet-18, our DLC model achieves a significant 8\% accuracy improvement, demonstrating exceptional efficiency. Under a fixed memory budget, methods equipped with DLC surpass state-of-the-art expansion-based methods.
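The core mechanism can be illustrated with a minimal sketch: each task contributes a low-rank residual `B @ A` added to a frozen base projection, and a weighting unit scores the per-task representations before they are aggregated. All names, dimensions, and the random score stub below are illustrative assumptions, not the paper's implementation (which trains the LoRA factors and the weighting unit).

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, num_tasks = 8, 2, 3  # feature dim, LoRA rank, task count (toy sizes)

W = rng.standard_normal((d, d))  # frozen base-layer weight
x = rng.standard_normal(d)       # input feature

# One low-rank residual (B @ A) per task; each plugin holds only 2*d*r
# parameters versus d*d for the full layer, which is why the added cost
# stays a small fraction of the backbone.
loras = [(rng.standard_normal((d, r)) * 0.1,   # B: d x r
          rng.standard_normal((r, d)) * 0.1)   # A: r x d
         for _ in range(num_tasks)]

# Task-specific representations: base output plus that task's residual.
reps = np.stack([W @ x + B @ (A @ x) for B, A in loras])

# Lightweight weighting unit (stubbed here as random scores), normalized
# via softmax so non-target plugins can be down-weighted.
scores = rng.standard_normal(num_tasks)
weights = np.exp(scores) / np.exp(scores).sum()

# Aggregated representation fed to the classifier.
agg = (weights[:, None] * reps).sum(axis=0)
```

With `d = 512` and `r = 8`, each plugin would add `2 * 512 * 8 = 8192` parameters per adapted layer, consistent in spirit with the small parameter overhead the abstract reports.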