Federated graph learning (FGL) has attracted significant attention for enabling heterogeneous clients to train on their private graph data locally while interacting only with a centralized server, thereby preserving privacy. However, graph data across clients are typically non-IID, making it difficult for a single global model to perform well on every client. Another major bottleneck of FGL is its high communication cost. To address these challenges, we propose CEFGL, a communication-efficient personalized federated graph learning algorithm. Our method decomposes the model parameters into a low-rank generic component and a sparse private component. A dual-channel encoder learns sparse local knowledge in a personalized manner and low-rank global knowledge in a shared manner. In addition, we perform multiple local stochastic gradient descent iterations between communication rounds and integrate efficient compression techniques into the algorithm. The advantage of CEFGL lies in its ability to capture common and individual knowledge more precisely; by exploiting low-rank and sparse parameters together with compression, it substantially reduces communication complexity. Extensive experiments on sixteen datasets show that our method achieves the best classification accuracy across a variety of heterogeneous environments. In particular, compared to the state-of-the-art method FedStar, CEFGL (with GIN as the base model) improves accuracy by 5.64\% in the cross-dataset setting CHEM, reduces the communicated bits by a factor of 18.58, and shortens the communication time by a factor of 1.65.
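The low-rank-plus-sparse parameter split described above can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy (the function name and the SVD-plus-magnitude-thresholding scheme are illustrative choices, not the paper's actual procedure): a truncated SVD stands in for the shared low-rank component, and top-magnitude residual entries stand in for the sparse private component.

```python
import numpy as np

def decompose_low_rank_plus_sparse(W, rank, sparsity):
    """Illustrative split of a weight matrix W into a rank-`rank` shared
    part and a sparse private residual keeping a `sparsity` fraction of
    entries. Hypothetical helper, not CEFGL's actual decomposition."""
    # Low-rank part via truncated SVD: a proxy for shared/global knowledge.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    W_low = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

    # Sparse private part: keep only the largest-magnitude residual entries.
    R = W - W_low
    k = int(sparsity * R.size)
    if k == 0:
        return W_low, np.zeros_like(R)
    # k-th largest absolute value serves as the keep threshold.
    thresh = np.partition(np.abs(R).ravel(), R.size - k)[R.size - k]
    W_sparse = np.where(np.abs(R) >= thresh, R, 0.0)
    return W_low, W_sparse

W = np.random.default_rng(0).standard_normal((8, 8))
W_low, W_sparse = decompose_low_rank_plus_sparse(W, rank=2, sparsity=0.1)
```

Only `W_low` (or its factors) would need to be communicated to the server in such a scheme, while `W_sparse` stays on the client, which is the intuition behind the reported reduction in communicated bits.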