The integration of Spiking Neural Networks (SNNs) and Graph Neural Networks (GNNs) is gradually attracting attention due to their low power consumption and high efficiency in processing the non-Euclidean data represented by graphs. However, as a common problem, dynamic graph representation learning faces challenges such as high complexity and large memory overhead. Existing work often replaces Recurrent Neural Networks (RNNs) with SNNs, substituting binary features for continuous ones to enable efficient training; however, this overlooks graph structure information and loses detail during propagation. Additionally, optimizing dynamic spiking models typically requires propagating information across time steps, which increases memory requirements. To address these challenges, we present a framework named \underline{Dy}namic \underline{S}p\underline{i}king \underline{G}raph \underline{N}eural Networks (\method{}). To mitigate information loss, \method{} propagates early-layer information directly to the last layer for information compensation. To reduce memory requirements, we apply implicit differentiation at the equilibrium state, which does not rely on the exact reverse of the forward computation. While traditional implicit differentiation methods are typically applied to static settings, \method{} extends them to dynamic graphs. Extensive experiments on three large-scale real-world dynamic graph datasets validate the effectiveness of \method{} on dynamic node classification tasks with lower computational costs.