Over the last decade, graph neural networks (GNNs) have made significant progress on numerous graph machine learning tasks. In real-world applications, where domain shifts occur and labels are often unavailable in a new target domain, graph domain adaptation (GDA) approaches have been proposed to facilitate knowledge transfer from the source domain to the target domain. Previous efforts to tackle distribution shifts across domains have mainly focused on aligning the node embedding distributions generated by the GNNs in the source and target domains. However, the underlying GNN architecture, despite being the core component of GDA approaches, has received limited attention. In this work, we explore this orthogonal direction, i.e., how to facilitate GDA through architectural enhancement. In particular, we consider a class of GNNs designed explicitly from optimization problems, namely unfolded GNNs (UGNNs), whose training process can be cast as bi-level optimization. Empirical and theoretical analyses demonstrate that when transferring from the source domain to the target domain, the lower-level objective value produced by the UGNNs increases significantly, which in turn drives up the upper-level objective. Motivated by this observation, we propose a simple yet effective strategy called cascaded propagation (CP), which is guaranteed to decrease the lower-level objective value. The CP strategy is widely applicable to general UGNNs, and we evaluate its efficacy with three representative UGNN architectures. Extensive experiments on five real-world datasets demonstrate that UGNNs integrated with CP outperform state-of-the-art GDA baselines.
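To make the bi-level structure referenced above concrete, here is an illustrative sketch in our own notation; the specific energy $E$, base predictor $f_W$, graph Laplacian $L$, step size $\alpha$, and regularization weight $\lambda$ are expository assumptions, not the paper's exact formulation. A UGNN's forward pass can be read as $K$ unrolled gradient steps on a graph-regularized lower-level energy,

\[
E(Z; W) \;=\; \|Z - f_W(X)\|_F^2 \;+\; \lambda\,\mathrm{tr}\!\left(Z^\top L Z\right),
\qquad
Z^{(k+1)} \;=\; Z^{(k)} - \alpha\,\nabla_Z E\!\left(Z^{(k)}; W\right),
\]

while the upper level fits the weights $W$ to the source labels $Y_S$ through the unrolled solution,

\[
\min_W \; \ell\!\left(g\!\left(Z^{(K)}(W)\right),\, Y_S\right)
\quad \text{s.t.} \quad
Z^{(K)}(W) \,\approx\, \operatorname*{arg\,min}_Z \, E(Z; W).
\]

Under this reading, each additional descent step on the convex energy $E$ (for a suitably small step size) cannot increase its value, which is the sense in which cascading extra propagation steps on the target graph is guaranteed not to increase, and typically to decrease, the lower-level objective.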