Spiking neural networks (SNNs) have gained prominence for their potential in neuromorphic computing and energy-efficient artificial intelligence, yet optimizing them remains a formidable challenge for gradient-based methods due to their discrete, spike-based computation. This paper tackles these challenges by introducing Cosine Annealing Differential Evolution (CADE), which modulates the mutation factor (F) and crossover rate (CR) of differential evolution (DE) for an SNN model, namely the Spiking Element-Wise (SEW) ResNet. Extensive empirical evaluations were conducted to analyze CADE. CADE balanced exploration and exploitation of the search space, yielding accelerated convergence and improved accuracy compared to existing gradient-based and DE-based methods. Moreover, an initialization method based on a transfer-learning setting was developed, pretraining on a source dataset (i.e., CIFAR-10) and fine-tuning on the target dataset (i.e., CIFAR-100) to improve population diversity; it was found to further enhance CADE for SNNs. Remarkably, CADE elevates the performance of the highest-accuracy SEW model by an additional 0.52 percentage points, underscoring its effectiveness in fine-tuning and enhancing SNNs. These findings emphasize the pivotal role of a scheduler for F and CR adjustment, especially for DE-based SNN optimization. Source code is available on GitHub: https://github.com/Tank-Jiang/CADE4SNN.
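The core idea, cosine-annealed schedules for F and CR inside a DE loop, can be sketched as follows. This is a minimal illustration on a toy sphere-function benchmark, not the paper's implementation: the schedule ranges (F and CR in [0.1, 0.9]), population size, and DE/rand/1/bin variant are assumptions for demonstration, and the actual CADE optimizes SEW ResNet weights rather than a low-dimensional test function.

```python
import math
import random

def cosine_anneal(t, T, lo, hi):
    """Cosine-annealed value: starts at hi at t=0 and decays to lo at t=T."""
    return lo + 0.5 * (hi - lo) * (1 + math.cos(math.pi * t / T))

def de_cade_sketch(fitness, dim=5, pop_size=20, generations=100, seed=0):
    """Toy DE/rand/1/bin loop whose F and CR follow a cosine schedule.

    The ranges F, CR in [0.1, 0.9] are illustrative assumptions, not the
    paper's reported settings.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    fit = [fitness(x) for x in pop]
    for t in range(generations):
        # Both control parameters decay from 0.9 to 0.1 over the run:
        # early generations explore (large F/CR), later ones exploit.
        F = cosine_anneal(t, generations, 0.1, 0.9)
        CR = cosine_anneal(t, generations, 0.1, 0.9)
        for i in range(pop_size):
            # DE/rand/1 mutation: three distinct individuals other than i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = [
                pop[a][j] + F * (pop[b][j] - pop[c][j])
                if (rng.random() < CR or j == j_rand) else pop[i][j]
                for j in range(dim)
            ]
            f_trial = fitness(trial)
            if f_trial <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# Usage: minimize the sphere function sum(x_i^2).
sphere = lambda x: sum(v * v for v in x)
best_x, best_f = de_cade_sketch(sphere)
```

In the paper's setting, each DE individual would encode (a subset of) the SEW ResNet parameters and the fitness would be validation accuracy on the target dataset, with the transfer-learning initialization seeding the population from a CIFAR-10-pretrained model.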