Meta-learning has emerged as a powerful approach for leveraging knowledge from previous tasks to solve new tasks. Mainstream methods focus on training a well-generalized model initialization, which is then adapted to different tasks with limited data and updates. However, this paradigm pushes the model toward overfitting on the training tasks. Previous methods mainly attributed this to the lack of data and addressed it with augmentations, but their effectiveness is bounded by the need for sufficient training and well-designed augmentation strategies. In this work, we focus on the more fundamental ``learning to learn'' strategy of meta-learning to explore what causes these errors and how to eliminate them without changing the environment. Specifically, we first rethink the algorithmic procedure of meta-learning through a ``learning'' lens. Through theoretical and empirical analyses, we find that (i) this paradigm faces the risk of both overfitting and underfitting, and (ii) models adapted to different tasks promote each other, with a stronger effect when the tasks are more similar. Based on this insight, we propose using task relations to calibrate the optimization process of meta-learning and introduce a plug-and-play method called Task Relation Learner (TRLearner) to achieve this goal. Specifically, TRLearner first obtains task relation matrices from extracted task-specific meta-data, and then uses these matrices with a relation-aware consistency regularization to guide optimization. Extensive theoretical and empirical analyses demonstrate the effectiveness of TRLearner.
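To make the described procedure concrete, the following is a minimal sketch of one plausible instantiation (our assumption, not the released implementation of TRLearner): a task relation matrix is built from task-specific meta-data embeddings, and a relation-aware consistency term is added to the standard meta-objective. The names, the cosine-similarity relation, and the weight \texttt{lam} are illustrative choices.

\begin{verbatim}
# Hypothetical sketch of a task-relation-based regularizer (not the authors' code).
import torch
import torch.nn.functional as F

def task_relation_matrix(task_metadata: torch.Tensor) -> torch.Tensor:
    """task_metadata: (num_tasks, d) task-specific embeddings, e.g. averaged
    support-set features. Returns a row-normalized (num_tasks, num_tasks)
    relation matrix based on cosine similarity (an assumed choice)."""
    z = F.normalize(task_metadata, dim=-1)
    sim = z @ z.t()                  # pairwise cosine similarities
    sim.fill_diagonal_(0.0)          # ignore self-relations
    return F.softmax(sim, dim=-1)    # row-normalized relation weights

def relation_consistency_loss(task_losses: torch.Tensor,
                              relation: torch.Tensor) -> torch.Tensor:
    """Encourage each task's adapted loss to stay close to the
    relation-weighted average of its related tasks' losses."""
    neighbor_loss = relation @ task_losses   # (num_tasks,)
    return F.mse_loss(task_losses, neighbor_loss)

# Usage: combine with the usual per-task query losses after inner-loop adaptation.
num_tasks, d = 8, 64
metadata = torch.randn(num_tasks, d)                  # extracted task-specific meta-data
losses = torch.rand(num_tasks, requires_grad=True)    # per-task adapted losses (placeholder)
R = task_relation_matrix(metadata)
lam = 0.1                                             # regularization weight (hypothetical)
meta_loss = losses.mean() + lam * relation_consistency_loss(losses, R)
meta_loss.backward()
\end{verbatim}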