Online meta-learning has recently emerged as a marriage of batch meta-learning and online learning, aiming to enable quick adaptation to new tasks in a lifelong manner. However, most existing approaches focus on the restrictive setting where the distribution of online tasks remains fixed and task boundaries are known. In this work, we relax these assumptions and propose a novel algorithm for task-agnostic online meta-learning in non-stationary environments. More specifically, we first propose two simple but effective detection mechanisms, for task switches and for distribution shift, based on empirical observations; these serve as key building blocks for the online model updates in our algorithm. The task switch detection mechanism allows reuse of the best available model for the current task at hand, and the distribution shift detection mechanism differentiates the meta model update so as to preserve knowledge for in-distribution tasks while quickly learning new knowledge for out-of-distribution tasks. In particular, our online meta model updates rely only on the current data, which eliminates the need to store previous data as required by most existing methods. We further show that our algorithm achieves a sublinear task-averaged regret under mild conditions. Empirical studies on three different benchmarks clearly demonstrate the significant advantage of our algorithm over related baseline approaches.
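The two detection mechanisms described above can be illustrated with a minimal sketch. The function names, window size, and thresholds below are illustrative assumptions for exposition only, not the paper's actual detection rules, which are derived from its own empirical observations:

```python
def detect_task_switch(losses, window=5, factor=2.0):
    """Hypothetical heuristic: flag a task switch when the latest
    online loss spikes well above the moving average of the
    preceding `window` losses."""
    if len(losses) < window + 1:
        return False  # not enough history to compare against
    recent_avg = sum(losses[-window - 1:-1]) / window
    return losses[-1] > factor * recent_avg


def detect_distribution_shift(adapt_loss, threshold=1.0):
    """Hypothetical heuristic: treat the current task as
    out-of-distribution when the loss after a few adaptation steps
    from the meta model stays above a fixed threshold."""
    return adapt_loss > threshold
```

On a task switch, the learner would restore the best model available for the new task; on a detected distribution shift, it would switch to a more aggressive meta model update to absorb the new knowledge, rather than the conservative update used for in-distribution tasks.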