Spiking neural networks (SNNs) support energy-efficient machine intelligence because event-driven computation and sparse activity map naturally to low-power digital hardware. In practical implementations, however, membrane states, synaptic weights, and thresholds are represented with finite-precision integer arithmetic. Quantization, clipping, and overflow can therefore alter network dynamics rather than merely approximate a higher-precision model. This paper adopts an integer-state dynamical perspective, modeling a hardware-oriented SNN as a deterministic map on a bounded integer lattice. Under this view, recurrence, periodic orbits, and regime changes become intrinsic properties of the system. We introduce a lightweight update rule with integer-valued states and shift-based leakage, and demonstrate the approach through exploratory simulations with network sizes N = 30-130, connection densities 0.1-0.9, and bit widths of 4, 8, and 16 over T = 1000 steps. The results show bounded, recurrent temporal structure with strong sensitivity to quantization, and the observed regimes depend heavily on representation semantics and scaling choices. These findings suggest that numerical precision acts as a dynamical design variable and highlight integer-state analysis as a useful framework for hardware-aware SNN co-design, motivating future work on attractor analysis, precision-aware training, and FPGA/ASIC validation.
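To make the integer-state perspective concrete, the following is a minimal sketch of one possible update rule with shift-based leakage, saturation to a fixed bit width, and threshold-and-reset spiking. It is an illustration under stated assumptions, not the paper's exact rule: the function name `step`, the zero-reset semantics, and all parameter values (`leak_shift`, `theta`) are hypothetical.

```python
import numpy as np

def step(v, w, spikes, theta, bits=8, leak_shift=2):
    """One integer-state update (illustrative, not the paper's exact rule).

    v         : integer membrane states, shape (N,)
    w         : integer synaptic weights, shape (N, N)
    spikes    : previous-step spike vector of 0/1 values, shape (N,)
    theta     : integer firing threshold
    bits      : signed bit width used to saturate the state
    leak_shift: shift amount; leak v -> v - (v >> leak_shift)
    """
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    v = v - (v >> leak_shift)            # shift-based leakage (arithmetic shift)
    v = v + w @ spikes                   # integer synaptic accumulation
    v = np.clip(v, lo, hi)               # saturate to the representable range
    out = (v >= theta).astype(np.int64)  # threshold crossing produces a spike
    v = np.where(out == 1, 0, v)         # reset fired neurons to zero (one choice)
    return v, out
```

Because the state space is a finite integer lattice and the map is deterministic, iterating `step` from any initial condition must eventually revisit a state, which is what makes recurrence and periodic orbits intrinsic rather than approximation artifacts.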