In continual learning (CL), model growth enhances adaptability to new data and improves knowledge retention across more tasks. However, improper model growth can severely degrade previously learned knowledge, an issue we term growth-induced forgetting (GIFt); it is especially pronounced in task-agnostic CL, where the entire grown model is used for inference. Existing works, despite adopting model growth and random initialization for better adaptability, often fail to recognize the GIFt caused by improper model growth. This oversight limits comprehensive control of forgetting and hinders full utilization of model growth. We are the first in CL to identify this issue and to study the root cause of GIFt in depth; among model growth strategies, layer expansion stands out, as it widens layers without altering model functionality. Yet direct adoption of layer expansion presents challenges: it lacks the data-driven control and initialization of expanded parameters needed to balance adaptability and knowledge retention. This paper presents a novel SparseGrow approach that overcomes GIFt while enhancing adaptability to new data. SparseGrow employs data-driven sparse layer expansion to control efficient parameter usage during growth, reducing the GIFt caused by excessive growth and functionality changes. It also combines sparse growth with on-data initialization late in training to create partially zero-valued expansions that fit the learned distribution, enhancing both retention and adaptability. To further minimize forgetting, freezing is applied by computing the sparse mask, allowing data-driven preservation of important parameters. Through experiments across datasets with various settings, cases, and task numbers, we demonstrate the necessity of layer expansion and showcase the effectiveness of SparseGrow in overcoming GIFt, highlighting its adaptability and knowledge retention over incremental tasks.
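The function-preserving property of layer expansion mentioned above can be illustrated with a minimal sketch. This is not the authors' implementation; it only shows, under simplified assumptions (a plain linear layer, pure-Python matrices, and the hypothetical helper `expand_linear`), how widening a layer with zero-valued rows and columns leaves the original input-output mapping unchanged, in the spirit of the partially zero-valued expansions described.

```python
# Hedged sketch: function-preserving width expansion of a linear layer
# y = W x + b. New columns are zero-initialized (new input units are
# ignored) and new rows are zero-initialized (new output units emit 0),
# so behavior on the previously learned inputs is preserved.

def matvec(W, x):
    """Multiply matrix W (list of rows) by vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def expand_linear(W, b, extra_in, extra_out):
    """Widen W from (out, in) to (out+extra_out, in+extra_in)
    with zero-valued expansion parameters."""
    widened = [row + [0.0] * extra_in for row in W]
    widened += [[0.0] * (len(W[0]) + extra_in) for _ in range(extra_out)]
    return widened, b + [0.0] * extra_out

# Original layer: 2 outputs, 2 inputs.
W = [[1.0, 2.0], [3.0, 4.0]]
b = [0.5, -0.5]
x = [1.0, 1.0]
y_old = [yi + bi for yi, bi in zip(matvec(W, x), b)]

# Grow by one input and one output unit.
W2, b2 = expand_linear(W, b, extra_in=1, extra_out=1)
x2 = x + [0.0]  # the new input unit is inactive at expansion time
y_new = [yi + bi for yi, bi in zip(matvec(W2, x2), b2)]

assert y_new[:2] == y_old  # old outputs are exactly unchanged
assert y_new[2] == 0.0     # the new output unit starts at zero
```

Random initialization of the expanded entries, by contrast, would immediately perturb `y_new[:2]`, which is the growth-induced forgetting the abstract describes; the sparse, data-driven choice of which expanded entries to activate is what SparseGrow adds on top of this baseline.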