Spiking Neural Networks (SNNs) promise energy-efficient computing through event-driven sparsity, yet all existing approaches sacrifice accuracy by approximating continuous values with discrete spikes. We propose NEXUS, a framework that achieves bit-exact ANN-to-SNN equivalence -- not approximate, but mathematically identical outputs. Our key insight is constructing all arithmetic operations, both linear and nonlinear, from pure IF neuron logic gates that implement IEEE-754 compliant floating-point arithmetic. Through spatial bit encoding (zero encoding error by construction), hierarchical neuromorphic gate circuits (from basic logic gates to complete transformer layers), and surrogate-free STE training (exact identity mapping rather than heuristic approximation), NEXUS produces outputs identical to standard ANNs up to machine precision. Experiments on models up to LLaMA-2 70B demonstrate identical task accuracy (0.00% degradation) with mean ULP error of only 6.19, while achieving 27--168,000$\times$ energy reduction on neuromorphic hardware. Crucially, spatial bit encoding's single-timestep design renders the framework inherently immune to membrane potential leakage (100% accuracy across all decay factors $\beta \in [0.1, 1.0]$), while tolerating synaptic noise up to $\sigma = 0.2$ with $>$98% gate-level accuracy.
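The abstract's core construction -- Boolean logic gates built from integrate-and-fire (IF) neurons, composed upward into bit-exact arithmetic -- can be illustrated with a minimal sketch. This is an assumption-laden toy, not the NEXUS implementation: it shows only how a single-timestep IF neuron with a hard threshold realizes AND/OR/NOT over spatially encoded bits, and how those gates compose into a half adder, the first rung toward IEEE-754 adders and multipliers.

```python
# Illustrative sketch (NOT the paper's code): a single-timestep IF neuron
# fires iff its weighted input sum reaches threshold. Because each gate
# completes in one timestep, membrane decay never has a chance to act --
# consistent with the abstract's claimed immunity to leakage.

def if_neuron(inputs, weights, threshold):
    """Fire (1) iff the membrane potential reaches the threshold."""
    v = sum(w * x for w, x in zip(weights, inputs))  # membrane potential
    return 1 if v >= threshold else 0

def AND(a, b):
    # Fires only when both input spikes arrive.
    return if_neuron([a, b], [1.0, 1.0], 2.0)

def OR(a, b):
    # Fires when at least one input spikes.
    return if_neuron([a, b], [1.0, 1.0], 1.0)

def NOT(a):
    # Inhibitory weight on the input plus a constant excitatory bias spike.
    return if_neuron([a, 1], [-1.0, 1.0], 1.0)

def XOR(a, b):
    # Composed from the gates above: (a OR b) AND NOT(a AND b).
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    # Returns (sum bit, carry bit); chaining such units yields the
    # multi-bit adders that exact floating-point arithmetic requires.
    return XOR(a, b), AND(a, b)
```

Under spatial bit encoding, each bit of a value occupies its own neuron, so encoding is exact by construction; the gate circuits then operate on those bits without any rate- or time-based approximation.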