Over the past decade, deep neural networks have achieved remarkable success when trained with mini-batch stochastic gradient descent on large datasets. Building on this success, a growing body of research has explored applying neural networks to other learning scenarios. One framework that has attracted particular attention is meta-learning. Often described as "learning to learn," meta-learning is a data-driven approach to optimizing the learning algorithm itself. Two other branches of interest are continual learning and online learning, both of which involve incrementally updating a model on streaming data. Although these frameworks were initially developed independently, recent work has begun to investigate their combinations, proposing novel problem settings and learning algorithms. However, due to the increased complexity of these combined settings and the lack of unified terminology, discerning the differences between the learning frameworks can be challenging even for experienced researchers. To facilitate a clear understanding, this paper provides a comprehensive survey that organizes the various problem settings using consistent terminology and formal descriptions. By offering an overview of these learning paradigms, our work aims to foster further advances in this promising area of research.