Graph Neural Networks (GNNs) often struggle to propagate information across long distances due to oversmoothing and oversquashing. Existing remedies such as graph transformers or graph rewiring typically incur high computational cost or require altering the graph structure. We introduce a Bakry-Emery graph Laplacian that integrates diffusion and advection through a learnable node-wise potential, inducing task-dependent propagation dynamics without modifying the topology. This operator has a well-behaved spectral decomposition and acts as a drop-in replacement for standard Laplacians in spectral GNNs. Building on this operator, we develop mu-ChebNet, a spectral architecture that jointly learns the potential and Chebyshev filters, effectively bridging message-passing adaptivity and spectral efficiency. Our theoretical analysis shows how the potential modulates the spectrum, enabling control of key graph properties. Empirically, mu-ChebNet delivers consistent gains on synthetic long-range reasoning tasks and on real-world benchmarks, while offering an interpretable routing field that reveals how information flows through the graph. This establishes the Bakry-Emery Laplacian as a principled and efficient foundation for adaptive spectral graph learning.
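To make the drop-in-replacement claim concrete, the following is a minimal sketch, not the paper's implementation: it assumes the Bakry-Emery operator can be approximated by a potential-reweighted symmetric normalized Laplacian built from a learnable node-wise potential V (via node weights exp(-V)), and it applies a degree-K Chebyshev filter to that operator exactly as one would to a standard Laplacian. The names mu_laplacian and cheb_filter, the exp(-V) weighting, and the omission of an explicit advection term are illustrative assumptions; the paper's exact operator may differ.

import numpy as np

def mu_laplacian(A, V):
    # Illustrative potential-reweighted Laplacian (assumed form, not the paper's definition).
    # A: (n, n) symmetric adjacency matrix, V: (n,) learnable node-wise potential.
    mu = np.exp(-V)                               # node measure induced by the potential
    A_mu = np.sqrt(np.outer(mu, mu)) * A          # reweight edge (i, j) by sqrt(mu_i * mu_j)
    d = A_mu.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    return np.eye(len(V)) - d_inv_sqrt[:, None] * A_mu * d_inv_sqrt[None, :]

def cheb_filter(L, X, theta):
    # Degree-(K-1) Chebyshev filter: sum_k theta_k * T_k(L_hat) @ X,
    # with L_hat rescaled so its spectrum lies in [-1, 1].
    lam_max = np.linalg.eigvalsh(L).max()
    L_hat = 2.0 * L / lam_max - np.eye(L.shape[0])
    T_prev, T_curr = X, L_hat @ X
    out = theta[0] * T_prev + theta[1] * T_curr
    for k in range(2, len(theta)):
        T_prev, T_curr = T_curr, 2.0 * L_hat @ T_curr - T_prev
        out = out + theta[k] * T_curr
    return out

# Toy usage: 4-node path graph, random features, random potential and filter weights.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
X, V, theta = rng.normal(size=(4, 3)), rng.normal(size=4), rng.normal(size=4)
H = cheb_filter(mu_laplacian(A, V), X, theta)     # filtered node features, shape (4, 3)

In a learning setting, V and theta would be trainable parameters optimized jointly, which is the sense in which the potential and the Chebyshev filters are learned together; this sketch only shows the diffusion/reweighting side and leaves out how the advection component enters the operator.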