Class-incremental learning (CIL) aims to train a model to learn new classes from non-stationary data streams without forgetting old ones. In this paper, we propose a new kind of connectionist model by tailoring neural unit dynamics that adapt the behavior of neural networks for CIL. In each training session, a supervisory mechanism guides network expansion, keeping the growth size compactly commensurate with the intrinsic complexity of the newly arriving task. This yields a near-minimal network while still allowing the model to expand its capacity when it cannot sufficiently accommodate new classes. At inference time, the model automatically reactivates the required neural units to retrieve knowledge and leaves the remaining units inactivated to prevent interference. We name our model AutoActivator, which is effective and scalable. To gain insights into the neural unit dynamics, we theoretically analyze the model's convergence property via a universal approximation theorem on learning sequential mappings, which is under-explored in the CIL community. Experiments show that our method achieves strong CIL performance in rehearsal-free and minimal-expansion settings with different backbones.
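Below is a minimal conceptual sketch, in PyTorch, of the two behaviors the abstract describes: per-session expansion whose size tracks the new task, and selective reactivation of unit blocks at inference. Everything here is an illustrative assumption rather than the paper's actual mechanism; in particular, `ExpandableClassifier`, `grow`, `infer`, the use of `hidden` as a stand-in for task complexity, and the max-response gating rule are hypothetical.

```python
# Hypothetical sketch of expansion + selective reactivation;
# not the AutoActivator implementation.
import torch
import torch.nn as nn

class ExpandableClassifier(nn.Module):
    def __init__(self, feat_dim: int):
        super().__init__()
        self.feat_dim = feat_dim
        self.blocks = nn.ModuleList()  # one block of units per training session

    def grow(self, num_new_classes: int, hidden: int) -> nn.Module:
        """Add a near-minimal block for a new session; `hidden` stands in for
        a supervisory estimate of the task's intrinsic complexity."""
        block = nn.Sequential(
            nn.Linear(self.feat_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_new_classes),
        )
        self.blocks.append(block)
        return block  # train only this block; earlier blocks stay frozen

    @torch.no_grad()
    def infer(self, feats: torch.Tensor) -> torch.Tensor:
        """Per input, keep only the block with the strongest response active;
        the remaining blocks are treated as inactivated."""
        per_block = [b(feats) for b in self.blocks]                         # T x [B, C_t]
        conf = torch.stack([l.max(dim=-1).values for l in per_block], -1)   # [B, T]
        chosen = conf.argmax(dim=-1)                                        # [B]
        local = torch.stack([l.argmax(dim=-1) for l in per_block], -1)      # [B, T]
        picked = local.gather(-1, chosen.unsqueeze(-1)).squeeze(-1)         # [B]
        offsets = torch.tensor([0] + [l.shape[-1] for l in per_block]).cumsum(0)
        return offsets[chosen] + picked  # global class ids across sessions

# Usage: two sessions of 5 classes each over 64-d features.
model = ExpandableClassifier(feat_dim=64)
model.grow(num_new_classes=5, hidden=16)  # session 1
model.grow(num_new_classes=5, hidden=8)   # session 2: a "simpler" task
print(model.infer(torch.randn(4, 64)))    # predicted classes in [0, 10)
```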