Humans learn multiple tasks in succession with minimal mutual interference, aided by the context-gating mechanism of the prefrontal cortex (PFC). Brain-inspired spiking neural network (SNN) models have drawn considerable attention for their energy efficiency and biological plausibility. To overcome catastrophic forgetting when tasks are learned sequentially, current SNN models for lifelong learning focus on memory retention or regularization-based weight modification, but few SNN models replicate human experimental behavior. Inspired by the biological context-dependent gating mechanism found in the PFC, we propose an SNN with context gating trained by a local plasticity rule (CG-SNN) for lifelong learning. Iterative training that alternates global and local plasticity for task units strengthens the connections between task neurons and hidden neurons and preserves multi-task-relevant information. Experiments show that the proposed model is effective in retaining past learning experience and achieves better task selectivity than other methods during lifelong learning. Our results provide a new insight: the CG-SNN model extends context gating with good scalability to different SNN architectures with different spike-firing mechanisms. Thus, our models have good potential for parallel implementation on neuromorphic hardware and for modeling human behavior.
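The core idea of context gating with local plasticity can be illustrated with a minimal sketch. The code below is a hypothetical toy implementation, not the paper's actual model: a one-hot task unit injects a context current into a layer of threshold (integrate-and-fire-style) hidden neurons, and a Hebbian-style local rule strengthens task-to-hidden connections for co-active pairs. All names (`W_ih`, `W_th`, `forward`, `local_plasticity`) and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_tasks = 4, 8, 3

# Input-to-hidden weights and task-to-hidden weights (the "context gate").
# Both are illustrative; the paper's architecture and rule may differ.
W_ih = rng.normal(0.0, 0.5, size=(n_hidden, n_in))
W_th = np.abs(rng.normal(0.0, 0.5, size=(n_hidden, n_tasks)))

def forward(x, task_id, threshold=1.0):
    """One thresholded firing step: the active task unit adds a context
    current that biases which hidden neurons can reach threshold."""
    task = np.zeros(n_tasks)
    task[task_id] = 1.0
    current = W_ih @ x + W_th @ task
    spikes = (current >= threshold).astype(float)
    return spikes, task

def local_plasticity(spikes, task, lr=0.1):
    """Hebbian-style local update: strengthen task->hidden connections
    wherever the task unit and a hidden neuron fired together."""
    global W_th
    W_th += lr * np.outer(spikes, task)

x = rng.random(n_in)
spikes, task = forward(x, task_id=1)
local_plasticity(spikes, task)
```

Repeating the forward/update pair over a task's training phase makes the task unit's preferred hidden subpopulation progressively easier to recruit, which is the gating effect the abstract describes; the actual CG-SNN additionally interleaves this local rule with global (task-loss-driven) training.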