In recent years, spiking neural networks (SNNs) have attracted substantial interest for their potential to replicate the energy-efficient, event-driven processing of biological neurons. However, the application of SNNs to graph representation learning, particularly on non-Euclidean data, remains underexplored, and the influence of spiking dynamics on graph learning is not yet fully understood. This work addresses these gaps by examining the distinctive properties and benefits of spiking dynamics for graph representation learning. We propose a spike-based graph neural network model that incorporates spiking dynamics, enhanced by a novel spatial-temporal feature normalization (STFN) technique, to improve training efficiency and model stability. Our analysis examines the impact of rate coding and temporal coding on SNN performance, offering new insights into their advantages for deep graph networks and addressing challenges such as the oversmoothing problem. Experimental results demonstrate that our SNN models achieve performance competitive with state-of-the-art graph neural networks (GNNs) at considerably lower computational cost, highlighting the potential of SNNs for efficient neuromorphic computing in complex graph-based scenarios.