Deep neural networks (DNNs) have revolutionized artificial intelligence, yet they often underperform on out-of-distribution (OOD) data, a common scenario given the inevitable domain shifts of real-world applications. This limitation stems from the common assumption that training and testing data share the same distribution, an assumption frequently violated in practice. Although DNNs are highly effective given abundant data and computational power, they struggle under distributional shift and with limited labeled data, leading to overfitting and poor generalization across tasks and domains. Meta-learning offers a promising remedy: algorithms that acquire transferable knowledge across tasks for fast adaptation, eliminating the need to learn each task from scratch. This survey examines meta-learning with a focus on its contribution to domain generalization. We first clarify the concept of meta-learning for domain generalization and introduce a novel taxonomy based on feature-extraction strategy and classifier-learning methodology, offering a granular view of existing approaches. Through an exhaustive review of these methods and their underlying theories, we map out the fundamentals of the field. Our survey provides practical insights and an informed discussion of promising research directions, paving the way for future innovation in meta-learning for domain generalization.
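To make the episodic "learn to adapt" idea invoked above concrete, the following is a minimal first-order MAML-style sketch in Python. Everything in it is illustrative rather than drawn from the survey: synthetic linear-regression tasks stand in for domains, and the names make_task, grad_mse, meta_train and the step sizes alpha (inner) and beta (outer) are hypothetical choices.

```python
# Illustrative first-order MAML-style loop (a sketch, not the survey's method):
# each "task" is a linear-regression domain with its own ground-truth weights;
# the meta-learner seeks an initialization that adapts to any such domain in
# a few gradient steps.
import numpy as np

rng = np.random.default_rng(0)

def make_task(dim=5):
    """Sample a synthetic domain and return a data sampler for it."""
    # Nonzero task mean so the meta-learned initialization is informative.
    w_true = 2.0 + 0.5 * rng.normal(size=dim)
    def sample(n=32):
        X = rng.normal(size=(n, dim))
        y = X @ w_true + 0.1 * rng.normal(size=n)
        return X, y
    return sample

def grad_mse(w, X, y):
    """Analytic gradient of mean-squared error for a linear model."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def meta_train(dim=5, meta_steps=2000, inner_steps=3, alpha=0.05, beta=0.01):
    w = np.zeros(dim)                    # shared initialization (meta-parameters)
    for _ in range(meta_steps):
        sample = make_task(dim)          # draw a new task/domain each episode
        X_s, y_s = sample()              # support set: used for fast adaptation
        X_q, y_q = sample()              # query set: evaluates the adapted model
        w_task = w.copy()
        for _ in range(inner_steps):     # inner loop: task-specific adaptation
            w_task -= alpha * grad_mse(w_task, X_s, y_s)
        # First-order outer update: move the initialization toward weights that
        # perform well *after* adaptation (second-order terms are dropped).
        w -= beta * grad_mse(w_task, X_q, y_q)
    return w

if __name__ == "__main__":
    w_init = meta_train()
    print("meta-learned initialization:", np.round(w_init, 3))
```

The outer update treats the adapted weights as if they were independent of the initialization (the first-order approximation); full MAML would backpropagate through the inner-loop updates instead.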