Uniform expressivity guarantees that a Graph Neural Network (GNN) can express a query with parameters that do not depend on the size of the input graphs. This property is desirable in applications, as it keeps the number of trainable parameters independent of the size of the input graphs. Uniform expressivity of the two-variable guarded fragment (GC2) of first-order logic is a celebrated result for Rectified Linear Unit (ReLU) GNNs [Barcelo et al., 2020]. In this article, we prove that uniform expressivity of GC2 queries is not possible for GNNs with a wide class of Pfaffian activation functions (including the sigmoid and tanh), answering a question posed by [Grohe, 2021]. We also show that, despite these limitations, many of these GNNs can still efficiently express GC2 queries, with a number of parameters that grows only logarithmically in the maximal degree of the input graphs. Furthermore, we demonstrate that a log-log dependency on the degree is achievable for a certain choice of activation function. This shows that uniform expressivity can be successfully relaxed while still covering the large graphs that appear in practical applications. Our experiments illustrate that our theoretical estimates hold in practice.
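To make the object of study concrete, the following is a minimal sketch of an aggregate-combine GNN layer of the kind discussed above, parameterized by its activation function (ReLU vs. sigmoid). It assumes sum aggregation over neighbours and dense matrix representations; all names and shapes are illustrative, not taken from the paper.

```python
import numpy as np

def gnn_layer(H, A, W_self, W_agg, activation):
    """One aggregate-combine GNN layer.

    Each node combines its own feature vector (via W_self) with the sum
    of its neighbours' feature vectors (via W_agg), then applies the
    activation pointwise. H is (n, d), A is the (n, n) adjacency matrix,
    W_self and W_agg are (d, k); the output is (n, k).
    """
    return activation(H @ W_self + (A @ H) @ W_agg)

# The two activation regimes contrasted in the abstract: piecewise-linear
# ReLU vs. a Pfaffian activation such as the sigmoid.
relu = lambda x: np.maximum(x, 0.0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

if __name__ == "__main__":
    # Tiny example: a path graph on 3 nodes with random features/weights.
    A = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]], dtype=float)
    rng = np.random.default_rng(0)
    H = rng.standard_normal((3, 2))
    W_self = rng.standard_normal((2, 4))
    W_agg = rng.standard_normal((2, 4))

    out_relu = gnn_layer(H, A, W_self, W_agg, relu)
    out_sig = gnn_layer(H, A, W_self, W_agg, sigmoid)
    print(out_relu.shape, out_sig.shape)  # both (3, 4)
```

The same combine rule is applied in both cases; only the activation differs, which is exactly the knob whose choice the results above show governs whether GC2 queries can be expressed uniformly.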