Spiking Neural Networks (SNNs) are highly regarded for their energy efficiency, inherent activation sparsity, and suitability for real-time processing on edge devices. However, most current SNN methods adopt architectures borrowed from traditional artificial neural networks (ANNs), and when these conventional architectures are reused, SNNs typically reach lower accuracy than their ANN counterparts. In response, in this work we present LightSNN, a fast and efficient Neural Architecture Search (NAS) technique tailored to SNNs that autonomously selects the most suitable architecture, striking a good balance between accuracy and efficiency by enforcing sparsity. Building on the SNASNet framework, we use a cell-based search space that includes backward connections to construct our training-free, pruning-based NAS mechanism. Candidate architectures are scored with a sparsity-aware Hamming distance fitness that compares spike activation patterns across different data samples. Thorough experiments are conducted on both static (CIFAR10 and CIFAR100) and neuromorphic (DVS128-Gesture) datasets. Our LightSNN model achieves state-of-the-art results on CIFAR10 and CIFAR100, improves accuracy on DVS128-Gesture by 4.49%, and significantly reduces search time, most notably offering a 98x speedup over SNASNet and running 30% faster than the best existing method on DVS128-Gesture.
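To make the training-free fitness evaluation concrete, the following is a minimal PyTorch sketch of one plausible sparsity-aware Hamming distance score over binary spike activation patterns. The function name `sparsity_aware_hamming_fitness`, the `sparsity_weight` parameter, and the way the diversity and sparsity terms are combined are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def sparsity_aware_hamming_fitness(spike_patterns: torch.Tensor,
                                   sparsity_weight: float = 0.5) -> torch.Tensor:
    """Score a candidate SNN without training (hypothetical sketch).

    spike_patterns: (batch, n_neurons) tensor of 0/1 spike indicators,
        e.g. recorded at a layer via forward hooks for a mini-batch.
    Returns a scalar: higher = more diverse activations per sample pair,
    weighted against overall firing sparsity.
    """
    b, n = spike_patterns.shape
    # Pairwise Hamming distance between samples' patterns:
    # positions where two samples agree = co-active + co-silent neurons.
    co_active = spike_patterns @ spike_patterns.t()
    co_silent = (1 - spike_patterns) @ (1 - spike_patterns).t()
    hamming = n - co_active - co_silent  # disagreements per sample pair
    # Normalize over ordered off-diagonal pairs (diagonal entries are 0).
    diversity = hamming.sum() / (b * (b - 1) * n)
    # Sparsity term: reward low firing rates for energy efficiency.
    sparsity = 1.0 - spike_patterns.mean()
    return (1 - sparsity_weight) * diversity + sparsity_weight * sparsity
```

In a pruning-based search of this kind, each candidate cell would be ranked by such a score on a few mini-batches and low-scoring candidates discarded, avoiding any gradient-based training during the search.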