Work on continual learning (CL) has thus far largely focused on the problems arising from shifts in the data distribution. However, CL can be decomposed into two sub-problems: (a) shifts in the data distribution, and (b) the fact that the data arrives in chunks, so only part of it is available for training at any point in time. In this work, we look at the latter sub-problem: the chunking of data. We show that chunking is an important part of CL, accounting for around half of the performance drop from offline learning in our experiments. Furthermore, our results reveal that current CL algorithms do not address the chunking sub-problem, performing no better than plain SGD training when there is no shift in the data distribution. Therefore, we show that chunking is both an important and currently unaddressed sub-problem, and until it is addressed, CL methods will be capped in performance. Additionally, we analyse why performance drops when learning occurs on identically distributed chunks of data, and find that forgetting, which is often attributed to distribution shift, still arises and remains a significant problem. We also show that performance on the chunking sub-problem can be increased and that this improvement transfers to the full CL setting, where there is distribution shift. Hence, we argue that work on chunking can help advance CL in general.