Graph neural networks (GNNs) have emerged as a powerful framework for a wide range of node-level graph learning tasks. However, their performance typically depends on random or minimally informed initial feature representations, and poor initialization can lead to slower convergence and increased training instability. In this paper, we address this limitation by leveraging a statistically grounded one-hot graph encoder embedding (GEE) as a high-quality, structure-aware initialization for node features. Integrating GEE into standard GNNs yields the GEE-powered GNN (GG) framework. Across extensive simulations and real-world benchmarks, GG provides consistent and substantial performance gains in both unsupervised and supervised settings. For node classification, we further introduce GG-C, which concatenates the outputs of GG and GEE and outperforms competing methods, achieving roughly 10-50% accuracy improvements across most datasets. These results demonstrate the importance of principled, structure-aware initialization for improving the efficiency, stability, and overall performance of graph neural network architectures, enabling models to better exploit graph topology from the outset.