The dynamics of information diffusion within graphs are a critical open problem that heavily influences graph representation learning, especially when considering long-range propagation. This calls for principled approaches that control and regulate the degree of propagation and dissipation of information throughout the neural flow. Motivated by this, we introduce (port-)Hamiltonian Deep Graph Networks, a novel framework that models neural information flow in graphs by building on the laws of conservation of Hamiltonian dynamical systems. We reconcile, under a single theoretical and practical framework, both non-dissipative long-range propagation and non-conservative behaviors, introducing tools from mechanical systems to gauge the equilibrium between the two components. Our approach can be applied to general message-passing architectures, and it provides theoretical guarantees on information conservation over time. Empirical results demonstrate the effectiveness of our port-Hamiltonian scheme in pushing simple graph convolutional architectures to state-of-the-art performance on long-range benchmarks.
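For context, the framework sketched in the abstract builds on the standard (port-)Hamiltonian formulation from the dynamical-systems literature; the equations below recall that general form only and are not the paper's specific graph instantiation, whose choice of state, Hamiltonian $H$, and coupling to the graph structure is specified later. A Hamiltonian system evolves a state $x = (q, p)$ as
\[
\dot{x}(t) = J \,\nabla_x H\big(x(t)\big),
\qquad
J = \begin{bmatrix} 0 & I \\ -I & 0 \end{bmatrix},
\]
and conserves the energy $H$ along trajectories, since $\dot{H} = \nabla_x H^{\top} J \,\nabla_x H = 0$ by skew-symmetry of $J$; this is the non-dissipative, conservation-based propagation referred to above. The port-Hamiltonian extension adds a dissipation term and an external input port,
\[
\dot{x}(t) = \big(J - R\big)\,\nabla_x H\big(x(t)\big) + B\,u(t),
\qquad
R = R^{\top} \succeq 0,
\]
for which $\dot{H} = -\nabla_x H^{\top} R\,\nabla_x H + \nabla_x H^{\top} B\,u$, so energy can be dissipated through $R$ or injected through the port $B\,u$ rather than strictly conserved; this is the non-conservative behavior that the framework reconciles with conservation.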