Spiking Neural Networks (SNNs) promise energy-efficient computing through event-driven sparsity, yet all existing approaches sacrifice accuracy by approximating continuous values with discrete spikes. We propose NEXUS, a framework that achieves bit-exact ANN-to-SNN equivalence -- not approximate, but mathematically identical outputs. Our key insight is to construct all arithmetic operations, both linear and nonlinear, from pure IF-neuron logic gates that implement IEEE-754-compliant floating-point arithmetic. Through spatial bit encoding (zero encoding error by construction), hierarchical neuromorphic gate circuits (from basic logic gates to complete transformer layers), and surrogate-free STE training (an exact identity mapping rather than a heuristic approximation), NEXUS produces outputs identical to those of standard ANNs up to machine precision. Experiments on models up to LLaMA-2 70B demonstrate identical task accuracy (0.00\% degradation) with a mean ULP error of only 6.19, while achieving a 27--168,000$\times$ energy reduction on neuromorphic hardware. Crucially, the single-timestep design of spatial bit encoding renders the framework inherently immune to membrane-potential leakage (100\% accuracy across all decay factors $\beta \in [0.1, 1.0]$), while tolerating synaptic noise up to $\sigma = 0.2$ with $>98\%$ gate-level accuracy.
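To make the core idea concrete, the sketch below illustrates how Boolean logic gates can be realized with single-timestep integrate-and-fire (IF) neurons -- the primitive from which the abstract says arithmetic circuits are composed. This is a minimal illustration under assumed weights and thresholds, not the paper's actual implementation; the function names (`if_neuron`, `half_adder`) are hypothetical.

```python
# Illustrative sketch (not the NEXUS implementation): logic gates from
# single-timestep IF neurons. Weights and thresholds are assumptions
# chosen so each neuron's firing condition matches the target gate.

def if_neuron(inputs, weights, threshold):
    """Single-timestep IF neuron: fire (1) iff the membrane potential
    (the weighted sum of binary input spikes) reaches the threshold."""
    v = sum(w * s for w, s in zip(weights, inputs))
    return 1 if v >= threshold else 0

def AND(a, b):  # fires only when both input spikes arrive
    return if_neuron([a, b], [1.0, 1.0], threshold=2.0)

def OR(a, b):   # fires when at least one spike arrives
    return if_neuron([a, b], [1.0, 1.0], threshold=1.0)

def NOT(a):     # constant bias spike plus an inhibitory input
    return if_neuron([1, a], [1.0, -1.0], threshold=1.0)

def XOR(a, b):  # composed hierarchically from the gates above
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def half_adder(a, b):
    """One-bit half adder: sum bit from XOR, carry bit from AND.
    Chaining such adders is the usual route toward the multi-bit
    (and ultimately floating-point) arithmetic the abstract describes."""
    return XOR(a, b), AND(a, b)

print(half_adder(1, 1))  # (0, 1): sum = 0, carry = 1
```

Because each gate fires within a single timestep, no membrane potential is carried across timesteps, which is why a leakage (decay) factor has nothing to act on -- the property behind the claimed immunity to membrane-potential leakage.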