Federated graph learning collaboratively trains a global graph neural network over distributed graphs, where the non-independent and identically distributed (non-IID) property is one of the major challenges. Most related works focus on traditional distributed tasks such as images and speech, and cannot handle graph structures. This paper first reveals that local client distortion arises from both node-level semantics and graph-level structure. First, for node-level semantics, we find that contrasting nodes from distinct classes is beneficial for learning well-performing discrimination. We pull each local node towards the global node of the same class and push it away from global nodes of different classes. Second, we postulate that a well-structured graph neural network produces similar representations for neighboring nodes due to the inherent adjacency relationships. However, directly aligning each node with its adjacent nodes hinders discrimination because of potential class inconsistency. We therefore transform the adjacency relationships into a similarity distribution and leverage the global model to distill this relational knowledge into the local model, which preserves both the structural information and the discriminability of the local model. Empirical results on three graph datasets demonstrate the superiority of the proposed method over its counterparts.
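The two components described above can be illustrated with a minimal NumPy sketch. This is an illustrative assumption, not the paper's implementation: it assumes the node-level term is an InfoNCE-style contrast of local node embeddings against per-class global prototypes, and that the structure-level term distills, via KL divergence, each node's similarity distribution over its neighbors from the global (teacher) model into the local (student) model. All function and variable names (`node_contrastive_loss`, `structure_distill_loss`, `global_protos`, `tau`) are hypothetical.

```python
import numpy as np

def node_contrastive_loss(z_local, y, global_protos, tau=0.5):
    """Hypothetical node-level term: pull each local node embedding toward
    the global prototype of its own class, push it from other classes
    (InfoNCE-style cross-entropy over class prototypes)."""
    sims = z_local @ global_protos.T / tau              # (N, C) similarities
    sims -= sims.max(axis=1, keepdims=True)             # numerical stability
    log_p = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(y)), y].mean()

def structure_distill_loss(z_local, z_global, neighbors, tau=0.5):
    """Hypothetical structure-level term: for each node, form a similarity
    distribution over its neighbors under the global model (teacher) and
    the local model (student), then minimize their KL divergence."""
    total = 0.0
    for i, nbrs in enumerate(neighbors):
        def neighbor_dist(z):
            s = z[nbrs] @ z[i] / tau                    # node-neighbor similarities
            s -= s.max()
            e = np.exp(s)
            return e / e.sum()                          # softmax over neighbors
        p = neighbor_dist(z_global)                     # teacher distribution
        q = neighbor_dist(z_local)                      # student distribution
        total += np.sum(p * (np.log(p) - np.log(q)))    # KL(p || q)
    return total / len(neighbors)
```

Because the adjacency relation enters only through the soft similarity distribution, a node is never forced to match a single neighbor's embedding, which is how this sketch avoids the class-inconsistency problem noted above.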