In recent years, significant efforts have been made to refine the design of Graph Neural Network (GNN) layers, aiming to overcome diverse challenges, such as limited expressive power and oversmoothing. Despite their widespread adoption, off-the-shelf normalization layers such as BatchNorm or InstanceNorm may not effectively capture the unique characteristics of graph-structured data when incorporated into a GNN architecture, potentially reducing the expressive power of the overall architecture. Moreover, existing graph-specific normalization layers often struggle to offer substantial and consistent benefits. In this paper, we propose GRANOLA, a novel graph-adaptive normalization layer. Unlike existing normalization layers, GRANOLA normalizes node features by adapting to the specific characteristics of the graph, particularly by generating expressive representations of its neighborhood structure, obtained by leveraging the propagation of Random Node Features (RNF) in the graph. We present theoretical results that support our design choices. Our extensive empirical evaluation across a range of graph benchmarks underscores the superior performance of GRANOLA over existing normalization techniques. Furthermore, GRANOLA emerges as the top-performing method among all baselines within the same time complexity as Message Passing Neural Networks (MPNNs).
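To make the mechanism concrete, the sketch below illustrates the general idea of graph-adaptive normalization as described above: random node features are propagated over the graph to obtain neighborhood-aware embeddings, which then produce per-node scale and shift parameters for normalizing the node features. This is a minimal NumPy illustration, not the paper's implementation; the mean-aggregation propagation, the linear affine heads (`W_gamma`, `W_beta`), and all hyperparameters are assumptions made for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(A, X):
    """One mean-aggregation message-passing step: average over neighbors."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    return (A @ X) / deg

def graph_adaptive_norm(A, H, num_rnf=8, steps=2):
    """Hypothetical sketch of GRANOLA-style normalization:
    propagate Random Node Features (RNF) to encode neighborhood structure,
    then derive per-node scale/shift for normalizing the node features H."""
    n, d = H.shape
    Z = rng.standard_normal((n, num_rnf))   # sample random node features
    for _ in range(steps):                  # propagate RNF over the graph
        Z = propagate(A, Z)
    # hypothetical affine heads: linear maps from RNF embedding to scale/shift
    W_gamma = rng.standard_normal((num_rnf, d)) * 0.1
    W_beta = rng.standard_normal((num_rnf, d)) * 0.1
    gamma = 1.0 + Z @ W_gamma               # per-node, graph-adaptive scale
    beta = Z @ W_beta                       # per-node, graph-adaptive shift
    # standardize H over the node dimension, then apply the adaptive affine
    mu, sigma = H.mean(axis=0), H.std(axis=0) + 1e-5
    return gamma * (H - mu) / sigma + beta

# toy 4-node path graph
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.standard_normal((4, 3))
out = graph_adaptive_norm(A, H)
print(out.shape)  # (4, 3)
```

Because the scale and shift depend on propagated RNF rather than being fixed learned vectors, nodes with different neighborhood structures receive different normalization parameters, which is the adaptivity the abstract contrasts with BatchNorm and InstanceNorm.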