Subgraph federated learning (SFL) is a research methodology that has gained significant attention for its potential to handle distributed graph-structured data. In SFL, each local model comprises a graph neural network (GNN) operating on a partial graph structure. However, some SFL models overlook the significance of missing cross-subgraph edges, which can prevent local GNNs from propagating global representations to other parties' GNNs via message passing. Moreover, existing SFL models require substantial labeled data, which limits their practical applicability. To overcome these limitations, we present a novel SFL framework called FedMpa that aims to learn cross-subgraph node representations. FedMpa first trains a multilayer perceptron (MLP) model using a small amount of data and then propagates the federated features to the local structures. To further improve the embedding representations of nodes within local subgraphs, we introduce the FedMpae method, which reconstructs the local graph structure from an innovative view that applies a pooling operation to form super-nodes. Our extensive experiments on six graph datasets demonstrate that FedMpa is highly effective for node classification. Furthermore, our ablation experiments verify the effectiveness of FedMpa.
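The two-stage idea described above (train an MLP on a few labeled nodes, then propagate its outputs over the local subgraph structure) can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact algorithm: the toy graph, the MLP training loop, and the symmetric-normalized propagation rule are all our assumptions for demonstration.

```python
import numpy as np

# Hedged sketch: an MLP trained on a small labeled set, whose outputs are
# then smoothed along local edges so unlabeled nodes receive information
# from their neighborhoods. All shapes and the propagation rule are
# illustrative assumptions, not FedMpa's published specification.

rng = np.random.default_rng(0)

# Toy local subgraph: 6 nodes, feature dim 4, 2 classes, 2 labeled nodes.
X = rng.normal(size=(6, 4))
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
labeled = np.array([0, 1])
y = np.array([0, 1])                       # labels for nodes 0 and 1 only
onehot = np.eye(2)[y]

# Stage 1: one-hidden-layer MLP trained with plain gradient descent
# on the few labeled nodes (cross-entropy loss).
W1 = rng.normal(scale=0.1, size=(4, 8))
W2 = rng.normal(scale=0.1, size=(8, 2))
for _ in range(200):
    H = np.maximum(X @ W1, 0.0)            # ReLU hidden layer
    logits = H @ W2
    P = np.exp(logits - logits.max(1, keepdims=True))
    P /= P.sum(1, keepdims=True)           # softmax over classes
    G = np.zeros_like(P)
    G[labeled] = (P[labeled] - onehot) / len(labeled)  # softmax-CE gradient
    W2 -= 0.5 * H.T @ G
    W1 -= 0.5 * X.T @ ((G @ W2.T) * (H > 0))

# Stage 2: propagate MLP outputs along local edges (symmetric
# normalization), mixing each node's prediction with its neighbors'.
D = A.sum(1)
A_hat = A / np.sqrt(np.outer(D, D))
Z = P.copy()
for _ in range(10):
    Z = 0.5 * Z + 0.5 * A_hat @ Z          # simple propagation step

pred = Z.argmax(1)
```

In a federated setting, Stage 1 would be trained collaboratively across parties (e.g. via federated averaging of the MLP weights), while Stage 2 runs locally on each party's subgraph.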