Spiking Neural Networks (SNNs) are highly energy-efficient thanks to event-driven, sparse computation, but their training is challenged by the non-differentiability of spikes and by trade-offs among performance, efficiency, and biological plausibility. Crucially, mainstream SNNs ignore predictive coding, a core cortical mechanism by which the brain predicts its inputs and encodes the prediction errors for efficient perception. Inspired by this, we propose a self-prediction enhanced spiking neuron that generates an internal prediction current from its input-output history to modulate the membrane potential. This design offers two advantages: it creates a continuous gradient path that alleviates vanishing gradients and improves training stability and accuracy, and it aligns with biological principles, resembling distal dendritic modulation and error-driven synaptic plasticity. Experiments show consistent performance gains across diverse architectures, neuron types, time steps, and tasks, demonstrating broad applicability for enhancing SNNs.
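The core mechanism can be sketched as a leaky integrate-and-fire (LIF) neuron whose membrane update is modulated by a current predicted from its own input-output history. This is a minimal illustrative sketch, not the paper's actual formulation: the predictor (an exponentially weighted average of past input-minus-spike "errors"), the mixing weight `alpha`, and the hard reset are all assumptions made for the example.

```python
# Minimal sketch of a self-prediction enhanced LIF neuron.
# The specific predictor and constants below are illustrative assumptions,
# not the formulation proposed in the paper.
from dataclasses import dataclass, field

@dataclass
class SelfPredLIF:
    tau: float = 2.0      # membrane time constant
    v_th: float = 1.0     # firing threshold
    alpha: float = 0.5    # weight of the self-prediction current (hypothetical)
    v: float = 0.0        # membrane potential
    history: list = field(default_factory=list)  # past (input, spike) pairs

    def predict_current(self) -> float:
        # Hypothetical predictor: exponentially weighted average of past
        # input-minus-output "errors", standing in for a prediction current
        # derived from the neuron's input-output history.
        if not self.history:
            return 0.0
        w, acc, norm = 1.0, 0.0, 0.0
        for x, s in reversed(self.history):
            acc += w * (x - s)
            norm += w
            w *= 0.5  # older history contributes less
        return acc / norm

    def step(self, x: float) -> int:
        # Leaky integration of the input plus the internal prediction current.
        i_pred = self.alpha * self.predict_current()
        self.v = self.v * (1.0 - 1.0 / self.tau) + x + i_pred
        spike = 1 if self.v >= self.v_th else 0
        if spike:
            self.v = 0.0  # hard reset after firing
        self.history.append((x, spike))
        return spike

neuron = SelfPredLIF()
spikes = [neuron.step(x) for x in [0.6, 0.6, 0.6, 0.6, 0.6]]
```

Note that `i_pred` is a continuous function of past inputs even on steps where no spike is emitted; in a surrogate-gradient training setup this is what would provide the continuous gradient path through time that the abstract refers to.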