Unsupervised Graph Domain Adaptation (UGDA) aims to transfer knowledge from a labelled source graph to an unlabelled target graph in order to address distribution shifts between graph domains. Previous works have primarily focused on aligning data from the source and target graphs in the representation space learned by graph neural networks (GNNs). However, the inherent generalization capability of GNNs has been largely overlooked. Motivated by our empirical analysis, we reevaluate the role of GNNs in graph domain adaptation and uncover the pivotal role of the propagation process in GNNs for adapting to different graph domains. We provide a comprehensive theoretical analysis of UGDA and derive a generalization bound for multi-layer GNNs. By formulating the Lipschitz constant of k-layer GNNs, we show that the target risk bound can be made tighter by removing propagation layers on the source graph and stacking multiple propagation layers on the target graph. Based on the empirical and theoretical analysis above, we propose a simple yet effective approach called A2GNN for graph domain adaptation. Through extensive experiments on real-world datasets, we demonstrate the effectiveness of our proposed A2GNN framework.
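The asymmetric design described above can be sketched minimally as follows. This is an illustrative assumption of the general idea, not the paper's exact implementation: a shared feature transformation `W` is applied to both domains, while symmetric-normalized propagation is applied `k = 0` times on the source graph and `k > 0` times on the target graph. The function names and the GCN-style propagation operator are assumptions for illustration.

```python
import numpy as np

def normalize_adj(A):
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def asymmetric_gnn_forward(X, A, W, k):
    """Shared linear transformation followed by k propagation steps.

    Following the asymmetric idea, one would use k = 0 on the source
    graph (no propagation) and a larger k on the target graph
    (stacked propagation layers).
    """
    H = X @ W  # shared transformation across domains
    P = normalize_adj(A)
    for _ in range(k):  # k propagation steps (parameter-free)
        H = P @ H
    return H
```

For example, source-domain representations would be computed with `asymmetric_gnn_forward(X_src, A_src, W, k=0)` (reducing to a plain MLP layer), while target-domain representations would use `k=3` or more propagation steps over the target adjacency.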