As the size of large language models continues to scale, so do the computational resources required to run them. Spiking Neural Networks (SNNs) have emerged as an energy-efficient approach to deep learning that leverages sparse and event-driven activations to reduce the computational overhead associated with model inference. While they have become competitive with non-spiking models on many computer vision tasks, SNNs have also proven more challenging to train. As a result, their performance lags behind modern deep learning, and we have yet to see the effectiveness of SNNs in language generation. In this paper, inspired by the Receptance Weighted Key Value (RWKV) language model, we successfully implement `SpikeGPT', a generative language model with binary, event-driven spiking activation units. We train the proposed model in two variants: 45M and 216M parameters. To the best of our knowledge, SpikeGPT is the largest backpropagation-trained SNN model to date, rendering it suitable for both the generation and comprehension of natural language. We achieve this by modifying the transformer block to replace multi-head self-attention with a mechanism that reduces quadratic computational complexity O(N^2) to linear complexity O(N) with increasing sequence length. Input tokens are instead streamed in sequentially to our attention mechanism (as with typical SNNs). Our preliminary experiments show that SpikeGPT remains competitive with non-spiking models on tested benchmarks while requiring 20x fewer operations when processed on neuromorphic hardware that can leverage sparse, event-driven activations. Our code implementation is available at https://github.com/ridgerchu/SpikeGPT.
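To make the linear-complexity claim concrete, the following is a minimal sketch of an RWKV-style recurrent attention substitute, in which tokens are streamed sequentially and each step updates a fixed-size state rather than attending over the full history. The function name, the scalar decay `w`, and the NumPy formulation are illustrative assumptions for exposition; the actual SpikeGPT implementation (see the linked repository) uses learned per-channel decays and spiking activations.

```python
import numpy as np

def rwkv_like_attention(r, k, v, w=0.5):
    """Illustrative RWKV-style recurrence (not the exact SpikeGPT kernel).

    r, k, v: arrays of shape (T, D) -- receptance, key, and value per token.
    w: scalar decay rate (a simplifying assumption; RWKV learns per-channel decays).

    Each token updates only an O(D) running state, so total cost is O(T * D),
    i.e. linear in sequence length T, versus O(T^2 * D) for full self-attention.
    """
    T, D = k.shape
    num = np.zeros(D)          # running exp-weighted sum of values
    den = np.zeros(D)          # running sum of exp weights (normalizer)
    out = np.empty((T, D))
    decay = np.exp(-w)         # geometric decay applied to past contributions
    for t in range(T):
        weight = np.exp(k[t])          # positive weight for the current token
        num = decay * num + weight * v[t]
        den = decay * den + weight
        gate = 1.0 / (1.0 + np.exp(-r[t]))   # sigmoid "receptance" gate
        out[t] = gate * (num / (den + 1e-8))
    return out
```

Because the state (`num`, `den`) is fixed-size, the same loop body serves for autoregressive generation: each new token costs O(D) regardless of how many tokens came before, which is what makes the sequential, event-driven streaming described above feasible.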