Continual learning is the problem of integrating new information into a model while retaining the knowledge acquired in the past. Despite the tangible improvements achieved in recent years, continual learning remains an open problem. A better understanding of the mechanisms behind the successes and failures of existing continual learning algorithms can unlock the development of new, more effective strategies. In this work, we view continual learning through the lens of multi-task loss approximation, and we compare two alternative strategies, namely local and global approximations. We classify existing continual learning algorithms according to the approximation they use, and we assess the practical effects of this distinction in common continual learning settings. Additionally, we study optimal continual learning objectives in the case of local polynomial approximations, and we provide examples of existing algorithms implementing these optimal objectives.
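As an illustrative sketch of what a local polynomial approximation looks like (the notation below is ours, not taken from the paper), such methods typically replace the inaccessible loss on past tasks with a truncated Taylor expansion around the parameters $\theta^*$ learned on those tasks:

$$
\mathcal{L}_{\text{old}}(\theta) \;\approx\; \mathcal{L}_{\text{old}}(\theta^*) + \nabla \mathcal{L}_{\text{old}}(\theta^*)^\top (\theta - \theta^*) + \tfrac{1}{2}\,(\theta - \theta^*)^\top H\,(\theta - \theta^*),
$$

where $H$ stands in for the Hessian of the old loss or a tractable surrogate. For example, Elastic Weight Consolidation can be read as an instance of this scheme that takes $H$ to be a diagonal Fisher information matrix and drops the first-order term, which vanishes at a minimum of the old loss.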