Uniform expressivity guarantees that a Graph Neural Network (GNN) can express a query without its parameters depending on the size of the input graphs. This property is desirable in applications, as it keeps the number of trainable parameters independent of the size of the input graphs. Uniform expressivity of the two-variable guarded fragment (GC2) of first-order logic is a celebrated result for GNNs with Rectified Linear Unit (ReLU) activations [Barcelo et al., 2020]. In this article, we prove that uniform expressivity of GC2 queries is impossible for GNNs with a wide class of Pfaffian activation functions (including the sigmoid and tanh), answering a question posed by [Grohe, 2021]. We also show that, despite these limitations, many such GNNs can still efficiently express GC2 queries with a number of parameters that is logarithmic in the maximal degree of the input graphs. Furthermore, we demonstrate that a log-log dependency on the degree is achievable for a suitable choice of activation function. This shows that uniform expressivity can be successfully relaxed while still covering the large graphs that appear in practical applications. Our experiments illustrate that our theoretical estimates hold in practice.
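For concreteness, here is an illustrative sketch (not taken from the original text) of the query class and the activation class involved. GC2 combines guarded two-variable quantification with counting quantifiers, so a typical query, using a placeholder unary predicate $\mathrm{Blue}$, is
\[
  q(x) \;=\; \exists^{\geq 2} y \,\big( E(x,y) \wedge \mathrm{Blue}(y) \big),
\]
which holds at exactly those vertices having at least two blue neighbors; queries of this shape are what the GNNs above can or cannot express with size-independent parameters. On the activation side, the sigmoid $\sigma(t) = 1/(1 + e^{-t})$ is an example of a Pfaffian function, since it satisfies the polynomial differential equation $\sigma'(t) = \sigma(t)\,(1 - \sigma(t))$.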