Recently, the use of bio-inspired learning techniques such as Hebbian learning and its closely related Spike-Timing-Dependent Plasticity (STDP) variant has drawn significant attention for the design of compute-efficient AI systems that can continuously learn online at the edge. A key differentiating factor of this emerging class of neuromorphic continual learning systems is that learning must be carried out using a data stream received in its natural order, as opposed to conventional gradient-based offline training, where a static training dataset is assumed to be available a priori and is randomly shuffled to make the training set independent and identically distributed (i.i.d.). In contrast, the neuromorphic continual learning systems covered in this survey must integrate new information on the fly in a non-i.i.d. manner, which makes them subject to catastrophic forgetting. To build the next generation of neuromorphic AI systems that can continuously learn at the edge, a growing number of research groups are studying the use of Sparse and Predictive Coding-based Hebbian neural network architectures, as well as the related Spiking Neural Networks (SNNs) equipped with STDP learning. However, since this research field is still emerging, there is a need for a holistic view of the different approaches proposed in the literature so far. To this end, this survey covers a number of recent works on neuromorphic continual learning based on state-of-the-art Sparse and Predictive Coding techniques; provides background theory to help interested researchers quickly grasp the key concepts; and discusses important open research questions in light of the works covered in this paper. It is hoped that this survey will contribute to future research in the field of neuromorphic continual learning.
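To make the contrast with shuffled offline training concrete, the following is a minimal illustrative sketch (not taken from any work covered in the survey) of a Hebbian weight update of the form dW = lr * y * x^T applied to a data stream consumed in its natural, non-shuffled order; an Oja-style normalization is added here as one common assumption to keep the weights bounded.

```python
import numpy as np

rng = np.random.default_rng(0)
lr = 0.01                                # assumed learning rate (illustrative)
W = rng.normal(scale=0.1, size=(3, 5))   # 5 input features -> 3 output units

# Stand-in for an ordered, non-i.i.d. data stream received at the edge;
# unlike offline training, the samples are never shuffled.
stream = rng.normal(size=(100, 5))

for x in stream:
    y = W @ x                            # linear "post-synaptic" activity
    W += lr * np.outer(y, x)             # Hebbian rule: strengthen co-active pairs
    # Oja-style row normalization (an assumption, not part of the plain
    # Hebbian rule) prevents unbounded weight growth:
    W /= np.linalg.norm(W, axis=1, keepdims=True)
```

Because each update depends only on the current sample, such rules fit streaming operation naturally, but without additional mechanisms they remain exposed to the catastrophic forgetting discussed above.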