Continual learning (CL) enables deep neural networks to adapt to ever-changing data distributions. In practice, annotation can be costly, which motivates active continual learning (ACL): performing active learning (AL) in CL scenarios where reducing the labeling cost by selecting the most informative subset is preferable. However, conventional AL strategies are not suitable for ACL, as they focus solely on learning new knowledge, leading to catastrophic forgetting of previously learned tasks. ACL therefore requires a new AL strategy that balances the prevention of catastrophic forgetting with the ability to quickly learn new tasks. In this paper, we propose AccuACL, Accumulated informativeness-based Active Continual Learning, which makes novel use of the Fisher information matrix as a criterion for sample selection, derived from a theoretical analysis of the Fisher-optimality preservation properties within the ACL framework, while also addressing the scalability issue of Fisher information-based AL. Extensive experiments demonstrate that AccuACL significantly outperforms AL baselines across various CL algorithms, improving average accuracy and forgetting by 23.8% and 17.0%, respectively, on average.
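To make the selection criterion concrete, the sketch below scores unlabeled samples by the trace of their per-sample empirical Fisher information for a logistic model and picks the top-k under a labeling budget. This is a minimal, hypothetical illustration of Fisher information-based sample scoring, not the AccuACL algorithm itself, which additionally accumulates Fisher information over past tasks to guard against forgetting; all names and the toy model are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unlabeled pool and current model parameters (illustrative only).
X = rng.normal(size=(100, 5))   # candidate pool of unlabeled samples
w = rng.normal(size=5)          # current logistic-regression weights

def empirical_fisher_trace(x, w):
    """Trace of the per-sample Fisher information for logistic regression.

    For a Bernoulli likelihood p = sigmoid(w.x), the score gradient is
    g = (y - p) x and E[g g^T] = p(1 - p) x x^T, so the trace reduces to
    p(1 - p) * ||x||^2: uncertain samples with large norm score highest.
    """
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return p * (1.0 - p) * np.dot(x, x)

scores = np.array([empirical_fisher_trace(x, w) for x in X])
budget = 10
selected = np.argsort(scores)[-budget:]  # indices of top-k informative samples
```

In an ACL setting, a criterion like this would be combined with information accumulated from earlier tasks, so that selection also preserves previously learned knowledge rather than maximizing new-task informativeness alone.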