Recent years have seen considerable progress in the continual training of deep neural networks, predominantly thanks to approaches that add replay or regularization terms to the loss function to approximate the joint loss over all tasks so far. However, we show that even with a perfect approximation to the joint loss, these approaches still suffer from temporary but substantial forgetting when starting to train on a new task. Motivated by this 'stability gap', we propose that continual learning strategies should focus not only on the optimization objective, but also on the way this objective is optimized. While there is some continual learning work that alters the optimization trajectory (e.g., using gradient projection techniques), this line of research is positioned as an alternative to improving the optimization objective, while we argue it should be complementary. In search of empirical support for our proposition, we perform a series of pre-registered experiments combining replay-approximated joint objectives with gradient projection-based optimization routines. However, this first experimental attempt fails to show clear and consistent benefits. Nevertheless, our conceptual arguments, as well as some of our empirical results, demonstrate the distinctive importance of the optimization trajectory in continual learning, thereby opening up a new direction for continual learning research.
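The combination the abstract describes can be illustrated with a minimal sketch. Here we pair a replay-approximated joint gradient with an A-GEM-style projection step; this is a simplified illustration of the general idea, not the paper's exact experimental routines, and all names (`agem_project`, `g_replay`, etc.) are ours.

```python
import numpy as np

def agem_project(g, g_ref):
    """A-GEM-style gradient projection (illustrative sketch).

    If the candidate update direction g conflicts with the reference
    gradient g_ref (computed on replayed data from earlier tasks),
    remove the conflicting component so that, to first order, the
    update does not increase the loss on old tasks."""
    dot = float(g @ g_ref)
    if dot < 0.0:  # angle > 90 degrees: the update would hurt old tasks
        g = g - (dot / float(g_ref @ g_ref)) * g_ref
    return g

# The two ingredients the abstract contrasts, combined:
# 1) a replay-approximated joint objective  -> gradient g_joint
# 2) a projection-based optimization routine -> agem_project(g_joint, ...)
g_new = np.array([1.0, -1.0])       # gradient on the new task (toy example)
g_replay = np.array([0.0, 1.0])     # gradient on replayed old-task data
g_joint = 0.5 * (g_new + g_replay)  # replay-approximated joint gradient
g_update = agem_project(g_joint, g_replay)
```

The point of the combination is that the objective (the replay-augmented loss) and the trajectory (which direction each step actually takes) are controlled separately: even a perfect joint gradient can transiently conflict with old-task performance, and the projection constrains that step-by-step behavior.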