Continual learning (CL) refers to the ability to continuously learn and accumulate new knowledge while retaining useful information from past experiences. Although numerous CL methods have been proposed in recent years, deploying them directly in real-world decision-making problems is not straightforward due to their computational cost and lack of uncertainty quantification. To address these issues, we propose CL-BRUNO, a probabilistic, Neural Process-based CL model that performs scalable and tractable Bayesian updates and predictions. Our approach uses deep generative models to create a unified probabilistic framework capable of handling different types of CL problems, such as task- and class-incremental learning, allowing users to integrate information across different CL scenarios with a single model. It prevents catastrophic forgetting through distributional and functional regularisation without retaining any previously seen samples, making it appealing for applications where data privacy or storage capacity is a concern. Experiments show that CL-BRUNO outperforms existing methods on both natural-image and biomedical data sets, confirming its effectiveness in real-world applications.
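The rehearsal-free idea in the abstract — preventing forgetting via functional regularisation against a frozen model's predictions on generated pseudo-inputs, rather than stored samples — can be illustrated with a deliberately minimal sketch. This is not CL-BRUNO itself: the linear softmax classifier, the per-class Gaussian summary standing in for a deep generative model, and the toy two-task data are all hypothetical simplifications chosen only to make the mechanism concrete and runnable.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

class SoftmaxClassifier:
    """Tiny linear softmax classifier trained by batch gradient descent."""

    def __init__(self, dim, n_classes):
        self.W = np.zeros((dim, n_classes))

    def predict_proba(self, X):
        return softmax(X @ self.W)

    def fit(self, X, y, anchor_X=None, anchor_P=None, lam=1.0,
            lr=0.1, steps=300):
        Y = np.eye(self.W.shape[1])[y]
        for _ in range(steps):
            # Cross-entropy gradient on the current task's data.
            grad = X.T @ (self.predict_proba(X) - Y) / len(X)
            if anchor_X is not None:
                # Functional regulariser: match the frozen model's soft
                # predictions on pseudo-inputs (this is the gradient of
                # cross-entropy with soft targets, i.e. distillation).
                Pa = self.predict_proba(anchor_X)
                grad += lam * anchor_X.T @ (Pa - anchor_P) / len(anchor_X)
            self.W -= lr * grad
        return self

def accuracy(model, X, y):
    return float((model.predict_proba(X).argmax(axis=1) == y).mean())

# Task 1: two classes separated along the (1, 1) direction (toy data).
X1 = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y1 = np.array([0] * 50 + [1] * 50)
model = SoftmaxClassifier(dim=2, n_classes=2).fit(X1, y1)

# Stand-in "generative model": per-class Gaussian summaries of task 1.
# Only these statistics, not the raw samples, are carried forward.
stats = [(X1[y1 == c].mean(axis=0), X1[y1 == c].std(axis=0)) for c in (0, 1)]
pseudo = np.vstack([rng.normal(mu, sd, (50, 2)) for mu, sd in stats])
pseudo_P = model.predict_proba(pseudo)  # frozen predictions as soft targets

# Task 2: two classes separated along the (1, -1) direction.
X2 = np.vstack([rng.normal([-2, 2], 0.5, (50, 2)),
                rng.normal([2, -2], 0.5, (50, 2))])
y2 = np.array([0] * 50 + [1] * 50)
model.fit(X2, y2, anchor_X=pseudo, anchor_P=pseudo_P, lam=1.0)

acc1, acc2 = accuracy(model, X1, y1), accuracy(model, X2, y2)
```

Without the anchor term, fitting task 2 would overwrite the task-1 decision boundary; with it, the model retains high accuracy on both tasks while never touching the original task-1 samples, mirroring the privacy and storage motivation in the abstract. CL-BRUNO replaces each simplified piece with a probabilistic, Neural Process-based counterpart and adds calibrated uncertainty.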