Spiking neural networks (SNNs) have emerged as promising candidates for energy-efficient LLM inference. However, current energy evaluations for SNNs primarily focus on counting accumulate operations and fail to account for real-world hardware costs such as data movement, which can consume nearly 80% of the total energy. In this paper, we propose Matterhorn, a spiking transformer that integrates a novel masked time-to-first-spike (M-TTFS) encoding method to reduce spike movement and a memristive synapse unit (MSU) to eliminate weight access overhead. M-TTFS employs a masking strategy that reassigns the zero-energy silent state (a spike train of all 0s) to the most frequent membrane potential rather than the lowest. This aligns the coding scheme with the data distribution, minimizing spike movement energy without information loss. We further propose a `dead zone' strategy that maximizes sparsity by mapping all values within a given range to the silent state. At the hardware level, the MSU uses compute-in-memory (CIM) technology to perform analog integration directly within memory, effectively removing weight access costs. On the GLUE benchmark, Matterhorn establishes a new state of the art, surpassing existing SNNs by 1.42% in average accuracy while delivering a 2.31 times improvement in energy efficiency.
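The M-TTFS masking and dead-zone ideas described above can be illustrated with a minimal sketch. This is a hypothetical NumPy implementation under our own assumptions (uniform quantization, one spike per train, a symmetric dead zone around the silent level); the paper's actual encoder, function names, and parameters may differ.

```python
import numpy as np

def m_ttfs_encode(potentials, n_steps=8, dead_zone=0.0):
    """Sketch of masked time-to-first-spike (M-TTFS) encoding.

    Membrane potentials are quantized into n_steps levels, each normally
    emitting one spike at time index = level. The key idea: the zero-energy
    silent state (all-zero spike train) is reassigned to the MOST FREQUENT
    level rather than the lowest, so the most common value moves no spikes.
    A dead zone additionally maps levels near the silent level to silence,
    trading a little precision for extra sparsity.
    """
    # Uniformly quantize potentials to integer levels 0..n_steps-1
    lo, hi = potentials.min(), potentials.max()
    levels = np.round((potentials - lo) / (hi - lo + 1e-9) * (n_steps - 1)).astype(int)

    # The most frequent level receives the silent (all-zero) code
    counts = np.bincount(levels.ravel(), minlength=n_steps)
    silent_level = int(counts.argmax())

    # Dead zone: levels within `radius` of the silent level are also silenced
    radius = int(dead_zone * n_steps)
    silent_mask = np.abs(levels - silent_level) <= radius

    # Build spike trains: one spike at time index = level, except silenced entries
    trains = np.zeros(potentials.shape + (n_steps,), dtype=np.uint8)
    flat = trains.reshape(-1, n_steps)
    keep = ~silent_mask.ravel()
    flat[np.arange(flat.shape[0])[keep], levels.ravel()[keep]] = 1
    return trains, silent_level
```

On a distribution with a dominant mode, that mode's entries produce all-zero trains, so spike movement energy concentrates only on the less frequent values.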