Recently, brain-inspired spiking neural networks (SNNs) have attracted considerable research attention owing to their inherent biological interpretability, event-driven operation, and strong capacity for spatiotemporal perception, which makes them well suited to event-based neuromorphic datasets. In contrast to conventional static image datasets, event-based neuromorphic datasets are more difficult to extract features from because of their distinctive time-series structure and sparsity, which limits classification accuracy. To overcome this challenge, this paper introduces Neuromorphic Momentum Contrast Learning (NeuroMoCo), a novel approach that extends the benefits of self-supervised pre-training to SNNs to effectively unlock their potential. To the best of our knowledge, this is the first realization of self-supervised learning (SSL) based on momentum contrastive learning in SNNs. In addition, we devise a novel loss function, MixInfoNCE, tailored to the temporal characteristics of neuromorphic data to further improve classification accuracy; its contribution is verified through rigorous ablation experiments. Finally, experiments on DVS-CIFAR10, DVS128Gesture, and N-Caltech101 show that NeuroMoCo establishes new state-of-the-art (SOTA) benchmarks: 83.6% (Spikformer-2-256), 98.62% (Spikformer-2-256), and 84.4% (SEW-ResNet-18), respectively.
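For readers unfamiliar with momentum contrastive learning, the two generic building blocks the abstract refers to can be sketched as follows. This is a minimal NumPy illustration of the standard MoCo-style momentum update and the standard InfoNCE loss (He et al.'s formulation), not the paper's NeuroMoCo training pipeline or its MixInfoNCE loss; all function names and parameters here are illustrative.

```python
import numpy as np

def momentum_update(key_params, query_params, m=0.999):
    """Key encoder tracks the query encoder as an exponential moving average.

    This is the generic MoCo momentum rule: theta_k <- m*theta_k + (1-m)*theta_q.
    """
    return [m * k + (1.0 - m) * q for k, q in zip(key_params, query_params)]

def info_nce(q, k_pos, queue, tau=0.07):
    """Standard InfoNCE: pull q toward its positive key, push it away from
    queued negatives produced by the momentum (key) encoder.

    q:     (d,)   L2-normalized query embedding
    k_pos: (d,)   L2-normalized positive key embedding
    queue: (K, d) L2-normalized negative keys
    tau:   temperature
    """
    l_pos = q @ k_pos                      # scalar positive logit
    l_neg = queue @ q                      # (K,) negative logits
    logits = np.concatenate(([l_pos], l_neg)) / tau
    logits -= logits.max()                 # numerical stability
    # cross-entropy with the positive at index 0
    return -logits[0] + np.log(np.exp(logits).sum())
```

As a sanity check, the loss is smaller when the query is aligned with its positive key than when it is anti-aligned, and the momentum update keeps the key parameters close to their previous values for m near 1.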