Forgetting refers to the loss or deterioration of previously acquired knowledge. While existing surveys on forgetting have primarily focused on continual learning, forgetting is a prevalent phenomenon observed in various other research domains within deep learning. For example, forgetting manifests in generative models due to generator shifts, and in federated learning due to heterogeneous data distributions across clients. Addressing forgetting involves several challenges, including balancing the retention of old task knowledge with fast learning of new tasks, managing task interference under conflicting goals, and preventing privacy leakage. Moreover, most existing surveys on continual learning implicitly assume that forgetting is always harmful. In contrast, our survey argues that forgetting is a double-edged sword that can be beneficial and desirable in certain cases, such as privacy-preserving scenarios. By exploring forgetting in a broader context, we present a more nuanced understanding of this phenomenon and highlight its potential advantages. Through this comprehensive survey, we aspire to uncover potential solutions by drawing upon ideas and approaches from various fields that have dealt with forgetting. By examining forgetting beyond its conventional boundaries, we hope to encourage the development of novel strategies for mitigating, harnessing, or even embracing forgetting in real applications. A comprehensive list of papers about forgetting in various research fields is available at \url{https://github.com/EnnengYang/Awesome-Forgetting-in-Deep-Learning}.