Message Passing Neural Networks (MPNNs) have emerged as the {\em de facto} standard in graph representation learning. However, when it comes to link prediction, they often struggle, surpassed by simple heuristics such as Common Neighbor (CN). This discrepancy stems from a fundamental limitation: while MPNNs excel at node-level representation, they struggle to encode the joint structural features essential to link prediction, such as CN. To bridge this gap, we posit that, by harnessing the orthogonality of input vectors, pure message passing can indeed capture joint structural features. Specifically, we study the proficiency of MPNNs in approximating CN heuristics. Based on our findings, we introduce the Message Passing Link Predictor (MPLP), a novel link prediction model. MPLP taps into quasi-orthogonal vectors to estimate link-level structural features, all while preserving node-level complexity. Moreover, our approach demonstrates that leveraging message passing to capture structural features can offset MPNNs' expressiveness limitations at the expense of estimation variance. We conduct experiments on benchmark datasets from various domains, where our method consistently outperforms the baseline methods.
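To make the core intuition concrete, the following is a minimal NumPy sketch (illustrative only, not the authors' implementation; the toy graph, dimensions, and variable names are assumptions) of why quasi-orthogonal input vectors let pure message passing estimate the CN count: after one hop of aggregation, the inner product of two nodes' aggregated signatures concentrates around the number of their common neighbors, because cross terms between distinct random vectors are close to zero.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 4096                      # number of nodes, signature dimension

# Toy symmetric adjacency matrix with no self-loops (assumed random graph).
A = (rng.random((n, n)) < 0.01).astype(float)
A = np.triu(A, 1)
A = A + A.T

# Quasi-orthogonal node signatures: random unit vectors in R^d.
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)

# One round of pure message passing: each node sums its neighbors' signatures.
H = A @ X

# For a node pair (u, v), the inner product of aggregated signatures
# approximates |N(u) \cap N(v)|, since x_i . x_j ~ 0 for i != j
# and x_i . x_i = 1 for each shared neighbor i.
u, v = 0, 1
cn_estimate = H[u] @ H[v]              # noisy estimate of CN(u, v)
cn_exact = int(A[u] @ A[v])            # exact common-neighbor count
print(cn_estimate, cn_exact)
\end{verbatim}

Increasing the signature dimension $d$ shrinks the cross-term noise, which reflects the trade-off stated above: structural features become recoverable by message passing, but only up to an estimation variance controlled by $d$.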