Large language models (LLMs) often suffer from catastrophic forgetting in continual learning (CL) scenarios, where performance on previously learned tasks degrades severely as new tasks arrive sequentially. Although pioneering CL approaches based on orthogonal subspaces can mitigate task interference, they typically employ fixed budget allocation, neglecting the varying complexity across tasks and layers. Moreover, recent budget-adaptive tuning methods for LLMs often adopt multi-stage paradigms that decouple optimization from budget allocation. Such decoupling introduces potential misalignment, which hinders the practical application of these approaches in CL scenarios. To address these limitations, we propose OA-Adapter, a novel parameter-efficient approach for continual learning in LLMs that unifies dynamic budget adaptation with orthogonal subspace learning in a single end-to-end training stage. Specifically, OA-Adapter introduces a dynamic bottleneck dimension adaptation mechanism that simultaneously allocates an efficient parameter budget and optimizes the task objective without misalignment. To preserve previously acquired knowledge while coordinating with the dynamic budget allocation, orthogonal constraints are applied between the parameter subspace of the current task and the dynamically allocated parameter subspaces of historical tasks. Experimental results on continual learning benchmarks demonstrate that OA-Adapter outperforms state-of-the-art methods in both accuracy and parameter efficiency: it achieves higher average accuracy while using 58.5% fewer parameters on the standard CL benchmark, and maintains its advantages on two larger benchmarks comprising 15 tasks.
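To make the two ideas in the abstract concrete, the following is a minimal, illustrative sketch (not the paper's released implementation): a bottleneck adapter whose effective dimension is selected by a learnable gate during the same end-to-end optimization, plus an orthogonality penalty between the current task's subspace and the subspaces retained from historical tasks. All names (OAAdapterSketch, the gate threshold, etc.) are assumptions introduced here for illustration.

```python
# Illustrative sketch only, assuming a PyTorch setting; details differ from the actual OA-Adapter.
import torch
import torch.nn as nn
import torch.nn.functional as F


class OAAdapterSketch(nn.Module):
    def __init__(self, d_model: int, r_max: int = 16):
        super().__init__()
        self.down = nn.Linear(d_model, r_max, bias=False)  # d_model -> bottleneck
        self.up = nn.Linear(r_max, d_model, bias=False)    # bottleneck -> d_model
        self.gate = nn.Parameter(torch.zeros(r_max))        # soft mask over bottleneck dims
        self.past_subspaces: list[torch.Tensor] = []         # frozen bases from earlier tasks

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # A sigmoid gate softly switches bottleneck dimensions on or off, so the
        # parameter budget is adapted jointly with the task objective in one stage.
        mask = torch.sigmoid(self.gate)
        z = F.relu(self.down(h)) * mask
        return h + self.up(z)

    def orthogonality_loss(self) -> torch.Tensor:
        # Penalize overlap between the current down-projection rows (current-task
        # subspace) and the subspaces stored from historical tasks.
        loss = self.down.weight.new_zeros(())
        for basis in self.past_subspaces:  # basis: (r_old, d_model), frozen
            loss = loss + (self.down.weight @ basis.T).pow(2).sum()
        return loss

    @torch.no_grad()
    def finish_task(self, threshold: float = 0.5) -> None:
        # After a task, keep only the dimensions whose gates stayed open and store
        # their rows as that task's frozen subspace for future orthogonal constraints.
        keep = torch.sigmoid(self.gate) > threshold
        self.past_subspaces.append(self.down.weight[keep].detach().clone())
```

In this sketch the total loss for the current task would be the task loss plus a weighted `orthogonality_loss()` term; the gate values after `finish_task` determine how many bottleneck dimensions that task actually consumed, which is how a per-task, per-layer budget emerges without a separate allocation stage.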