Spiking Neural Networks (SNNs) offer ultra-low power/energy consumption for machine-learning tasks thanks to their sparse, spike-based operations. Currently, however, most SNN architectures require significantly larger models to reach higher accuracy, which makes them unsuitable for resource-constrained embedded applications. Developing SNNs that achieve high accuracy within an acceptable memory footprint is therefore highly desirable. Toward this, we propose SpiKernel, a novel methodology that improves the accuracy of SNNs through kernel size exploration. Its key steps include (1) investigating the impact of different kernel sizes on accuracy, (2) devising new sets of kernel sizes, (3) generating SNN architectures via neural architecture search based on the selected kernel sizes, and (4) analyzing the accuracy-memory trade-offs for SNN model selection. The experimental results show that SpiKernel achieves higher accuracy than state-of-the-art works (i.e., 93.24% on CIFAR10, 70.84% on CIFAR100, and 62% on TinyImageNet) with fewer than 10M parameters and up to 4.8x faster search time, making it suitable for embedded applications.
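To make steps (2) and (4) concrete, the sketch below (not the paper's implementation; all layer shapes and the candidate kernel sizes are assumptions for illustration) enumerates kernel-size combinations for a hypothetical 4-layer convolutional stack and keeps only those whose parameter count fits the sub-10M memory budget mentioned above:

```python
# Illustrative sketch of kernel-size exploration under a memory budget.
# NOT the SpiKernel implementation: the channel progression, candidate
# kernel sizes, and budget below are assumptions for demonstration.
from itertools import product


def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weights + biases of a single k x k convolution layer."""
    return c_out * (c_in * k * k + 1)


def stack_params(channels: list[int], kernels: tuple[int, ...]) -> int:
    """Total parameters of a conv stack.

    `channels` has one more entry than `kernels`: layer i maps
    channels[i] -> channels[i+1] with a kernels[i] x kernels[i] kernel.
    """
    return sum(conv_params(c_in, c_out, k)
               for (c_in, c_out), k in zip(zip(channels, channels[1:]),
                                           kernels))


# Hypothetical stack: 3 -> 64 -> 128 -> 256 -> 512 channels.
channels = [3, 64, 128, 256, 512]
budget = 10_000_000  # < 10M parameters, as targeted in the abstract

# Step (2)-style search space: every combination of candidate kernels.
feasible = [ks for ks in product([1, 3, 5, 7], repeat=4)
            if stack_params(channels, ks) <= budget]

# Step (4)-style trade-off view: the largest (most expressive, by
# parameter count) configuration that still fits the budget.
largest = max(feasible, key=lambda ks: stack_params(channels, ks))
print(len(feasible), largest, stack_params(channels, largest))
```

For this particular stack even the all-7x7 configuration stays under 10M parameters, so all 256 combinations are feasible; shrinking the budget or widening the channels would start pruning the larger kernels, which is exactly the accuracy-memory trade-off the model-selection step navigates.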