In recent years, Graph Neural Networks (GNNs) have become the de facto tool for learning node and graph representations. Most GNNs consist of a sequence of neighborhood-aggregation (a.k.a. message-passing) layers, in which the representation of each node is updated based on those of its neighbors. The most expressive message-passing GNNs can be obtained by combining the sum aggregator with MLPs for feature transformation, thanks to the universal approximation capabilities of the latter. However, the limitations of MLPs have recently motivated the introduction of another family of universal approximators, called Kolmogorov-Arnold Networks (KANs), which rely on a different representation theorem. In this work, we compare the performance of KANs against that of MLPs on graph learning tasks. We evaluate two implementations of KANs built on two distinct base families of functions, namely B-splines and radial basis functions. We perform extensive experiments on node classification, graph classification, and graph regression datasets. Our results indicate that KANs are on par with or better than MLPs on all studied tasks, making them viable alternatives at the cost of some additional computational complexity. Code is available at https://github.com/RomanBresson/KAGNN.
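The sum-aggregator message-passing scheme mentioned above can be sketched as follows. This is a minimal NumPy illustration of a GIN-style layer (sum aggregation followed by an MLP feature transformation), not the paper's implementation; the function names, the two-layer MLP shape, and the `eps` parameter are illustrative assumptions, and the MLP is the component the paper would swap for a KAN.

```python
import numpy as np

def mlp(x, W1, b1, W2, b2):
    # Two-layer MLP with ReLU hidden layer: the feature transformation
    # that the paper compares against a KAN-based transformation.
    return np.maximum(x @ W1 + b1, 0.0) @ W2 + b2

def gin_layer(X, A, params, eps=0.0):
    # Sum-aggregation message passing (GIN-style update):
    #   h_v = MLP((1 + eps) * x_v + sum_{u in N(v)} x_u)
    # X: (num_nodes, d_in) node features; A: (num_nodes, num_nodes)
    # adjacency matrix without self-loops.
    aggregated = (1.0 + eps) * X + A @ X
    return mlp(aggregated, *params)

# Toy example: a 3-node path graph with 2-dimensional features.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 2))
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
params = (rng.normal(size=(2, 8)), np.zeros(8),   # W1, b1
          rng.normal(size=(8, 4)), np.zeros(4))   # W2, b2
H = gin_layer(X, A, params)  # new node embeddings, shape (3, 4)
```

A KAN-based variant would replace `mlp` with learnable univariate basis expansions (e.g., B-splines or radial basis functions) on each edge of the network, keeping the sum aggregation unchanged.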