Continual Learning (CL) aims to learn in non-stationary scenarios, progressively acquiring and maintaining knowledge from sequential tasks. Recent Prompt-based Continual Learning (PCL) has achieved remarkable performance with Pre-Trained Models (PTMs). These approaches grow a pool of prompt sets by adding a new set of prompts when learning each new task (\emph{prompt learning}) and adopt a matching mechanism to select the correct set for each test sample (\emph{prompt retrieval}). Previous studies focus on the latter stage, improving the matching mechanism to enhance Prompt Retrieval Accuracy (PRA). To promote cross-task knowledge transfer and form an effective and efficient pool of prompt sets, we propose a plug-in module in the former stage to \textbf{Learn Whether to Grow (LW2G)} based on the disparities between tasks. Specifically, a shared set of prompts is utilized when several tasks share certain commonalities, and a new set is added when the new task differs significantly from previous tasks. Inspired by Gradient Projection Continual Learning, our LW2G develops a metric called Hinder Forward Capability (HFC) to measure the hindrance imposed on learning new tasks when the original gradient is projected onto the orthogonal complement of the old feature space. With HFC, an automated scheme, the Dynamic Growing Approach, adaptively learns whether to grow using a dynamic threshold. Furthermore, we design a gradient-based constraint to ensure consistency between the updated prompts and pre-trained knowledge, and a prompt-weight reuse strategy to enhance forward transfer. Extensive experiments show the effectiveness of our method. The source code is available at \url{https://github.com/RAIAN08/LW2G}.
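The projection step underlying HFC can be illustrated with a minimal sketch. The abstract does not give the exact formula for HFC, so the ratio-based proxy below (how much of the gradient's magnitude the projection removes) is a hypothetical instantiation for illustration only; `basis` is assumed to hold an orthonormal basis of the old tasks' feature space, as in gradient-projection continual learning.

```python
import numpy as np

def project_to_orthogonal_complement(grad, basis):
    """Remove the components of `grad` lying in the old feature space
    spanned by the orthonormal columns of `basis`:  g' = g - M M^T g."""
    return grad - basis @ (basis.T @ grad)

def hinder_forward_capability(grad, basis, eps=1e-12):
    """Hypothetical HFC proxy (not the paper's exact definition):
    the fraction of the gradient's magnitude removed by the projection.
    0 -> no hindrance; 1 -> the whole gradient lies in the old space."""
    g_proj = project_to_orthogonal_complement(grad, basis)
    return 1.0 - np.linalg.norm(g_proj) / (np.linalg.norm(grad) + eps)
```

In practice, `basis` would be obtained, e.g., from a QR or SVD decomposition of features collected on previous tasks; a large HFC value then signals that projecting would strongly hinder the new task, which is the situation where growing a new prompt set is warranted.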