Networks of interconnected neurons communicating through spiking signals form the bedrock of neural computation. The spiking neural networks in our brains have the computational capacity to achieve complex pattern recognition and cognitive functions effortlessly. However, solving real-world problems with artificial spiking neural networks (SNNs) has proved difficult for a variety of reasons. Crucially, scaling SNNs to large networks and processing large-scale real-world datasets have been challenging, especially in comparison with their non-spiking deep learning counterparts. The critical capability required of SNNs is to learn distributed representations from data and to use these representations for perceptual, cognitive, and memory operations. In this work, we introduce a novel SNN that performs unsupervised representation learning and associative memory operations, leveraging Hebbian synaptic and activity-dependent structural plasticity coupled with neuron units modelled as Poisson spike generators with sparse firing (~1 Hz mean and ~100 Hz maximum firing rate). Crucially, the architecture of our model derives from the neocortical columnar organization and combines feedforward projections for learning hidden representations with recurrent projections for forming associative memories. We evaluated the model on properties relevant to attractor-based associative memories, such as pattern completion, perceptual rivalry, distortion resistance, and prototype extraction.
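The neuron model described above, a Poisson spike generator with sparse firing (~1 Hz mean, ~100 Hz maximum rate), can be sketched as follows. This is a minimal illustrative implementation, not the authors' code; the function name, parameters, and the choice of discretizing time into Bernoulli bins are assumptions.

```python
import numpy as np

def poisson_spikes(rates_hz, duration_s=1.0, dt=0.001, max_rate_hz=100.0, seed=0):
    """Return a (time_steps, n_neurons) binary spike train.

    Illustrative sketch (not from the paper): each time bin of width dt
    emits a spike with probability rate * dt, and rates are clipped to
    max_rate_hz to enforce the stated ~100 Hz ceiling.
    """
    rng = np.random.default_rng(seed)
    rates = np.clip(np.asarray(rates_hz, dtype=float), 0.0, max_rate_hz)
    steps = int(round(duration_s / dt))
    p = rates * dt  # per-bin spike probability; valid since max_rate_hz * dt <= 0.1
    return (rng.random((steps, rates.size)) < p).astype(np.uint8)

# Example: a population of 1000 neurons firing sparsely at ~1 Hz on average.
spikes = poisson_spikes(rates_hz=np.full(1000, 1.0), duration_s=10.0)
mean_rate = spikes.mean() / 0.001  # spikes per bin converted to Hz
```

With a 1 ms bin and rates capped at 100 Hz, the per-bin spike probability stays at or below 0.1, so the Bernoulli approximation to a Poisson process remains reasonable while keeping the simulation a simple vectorized draw.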