Message Passing Neural Networks (MPNNs) have emerged as the {\em de facto} standard in graph representation learning. However, when it comes to link prediction, they often struggle, surpassed by simple heuristics such as Common Neighbor (CN). This discrepancy stems from a fundamental limitation: while MPNNs excel at node-level representation, they stumble at encoding the joint structural features essential for link prediction, such as CN. To bridge this gap, we posit that, by harnessing the orthogonality of input vectors, pure message passing can indeed capture joint structural features. Specifically, we study the proficiency of MPNNs in approximating CN heuristics. Based on our findings, we introduce the Message Passing Link Predictor (MPLP), a novel link prediction model. MPLP taps into quasi-orthogonal vectors to estimate link-level structural features, all while preserving node-level complexity. Moreover, our approach demonstrates that leveraging message passing to capture structural features can offset MPNNs' expressiveness limitations at the cost of estimation variance. We conduct experiments on benchmark datasets from various domains, where our method consistently outperforms the baseline methods.
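The core intuition — that message passing over quasi-orthogonal input vectors can estimate joint structural features such as CN — can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: node signatures are i.i.d. Gaussian vectors (quasi-orthogonal in high dimension), one round of sum-aggregation propagates them, and the inner product of two nodes' aggregated signatures estimates their common-neighbor count in expectation, with variance shrinking as the signature dimension grows. The graph, sizes, and variable names here are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 512  # hypothetical: number of nodes, signature dimension

# Toy undirected random graph (adjacency matrix)
A = (rng.random((n, n)) < 0.01).astype(float)
A = np.triu(A, 1)
A = A + A.T

# Quasi-orthogonal node signatures: i.i.d. Gaussian entries scaled so
# E[||x_i||^2] = 1 and E[<x_i, x_j>] = 0 for i != j.
X = rng.standard_normal((n, d)) / np.sqrt(d)

# One round of message passing: each node sums its neighbors' signatures.
H = A @ X

# For a node pair (u, v), <H_u, H_v> sums <x_i, x_j> over i in N(u),
# j in N(v); cross terms vanish in expectation, leaving the CN count.
u, v = 0, 1
cn_estimate = H[u] @ H[v]
cn_exact = A[u] @ A[v]
print(cn_estimate, cn_exact)
```

Increasing `d` tightens the estimate, which mirrors the abstract's trade-off: structural expressiveness is bought at the price of estimation variance.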