Expressivity theory, which characterizes which graphs a GNN can distinguish, has become the predominant framework for analyzing GNNs, with new models striving for ever-higher expressivity. However, we argue that this focus is misguided: First, higher expressivity is rarely necessary for real-world tasks, which seldom require distinguishing power beyond the basic Weisfeiler-Leman (WL) test. Second, expressivity theory's binary characterization and idealized assumptions fail to reflect GNNs' practical capabilities. To overcome these limitations, we propose Message Passing Complexity (MPC): a continuous measure that quantifies how difficult it is for a GNN architecture to solve a given task through message passing. MPC captures practical limitations such as over-squashing while preserving the impossibility results of expressivity theory, effectively narrowing the gap between theory and practice. Through extensive validation on fundamental GNN tasks, we show that MPC's theoretical predictions correlate with empirical performance, successfully explaining architectural successes and failures. MPC thereby advances beyond expressivity theory, providing a more powerful and nuanced framework for understanding and improving GNN architectures.