Federated graph learning (FGL) enables collaborative training of graph neural networks (GNNs) across decentralized subgraphs without exposing raw data. While existing FGL methods often achieve high overall accuracy, we show that this average performance can conceal severe degradation on disadvantaged node groups. From a fairness perspective, these disparities arise systematically from three coupled sources: label skew toward majority patterns, topology confounding in message propagation, and aggregation dilution of updates from hard clients. To address this, we propose \textbf{BoostFGL}, a boosting-style framework for fairness-aware FGL. BoostFGL introduces three coordinated mechanisms: \ding{182} \emph{Client-side node boosting}, which reshapes local training signals to emphasize systematically under-served nodes; \ding{183} \emph{Client-side topology boosting}, which reallocates propagation emphasis toward reliable yet underused structures and attenuates misleading neighborhoods; and \ding{184} \emph{Server-side model boosting}, which performs difficulty- and reliability-aware aggregation to preserve informative updates from hard clients while stabilizing the global model. Extensive experiments on 9 datasets show that BoostFGL delivers substantial fairness gains, improving Overall-F1 by 8.43\%, while preserving competitive overall performance against strong FGL baselines.
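The server-side model boosting described above can be illustrated with a minimal sketch. Everything here is a hypothetical reconstruction, not BoostFGL's published procedure: the function name `boosted_aggregate`, the use of per-client validation loss as a difficulty signal, node counts as a reliability proxy, and the exponential weighting rule are all assumptions chosen to convey the idea of difficulty- and reliability-aware aggregation.

```python
# Hypothetical sketch of difficulty- and reliability-aware server aggregation.
# The weighting rule and all names are illustrative assumptions, not the
# actual BoostFGL algorithm.
import numpy as np

def boosted_aggregate(client_params, client_losses, client_sizes, temperature=1.0):
    """Aggregate client updates, up-weighting hard but reliable clients.

    client_params: list of 1-D parameter vectors, one per client.
    client_losses: per-client validation loss; a higher loss marks a harder
                   client, which receives MORE weight so its signal is not
                   diluted by easy-majority clients.
    client_sizes:  per-client node counts, used here as a reliability proxy.
    """
    losses = np.asarray(client_losses, dtype=float)
    sizes = np.asarray(client_sizes, dtype=float)
    # Boosting-style emphasis on hard clients, tempered by data reliability.
    raw = np.exp(losses / temperature) * np.sqrt(sizes)
    weights = raw / raw.sum()
    stacked = np.stack(client_params)   # shape: (n_clients, n_params)
    return weights @ stacked            # reliability-weighted average of updates

# Two toy clients with equal data sizes; client 1 is harder (loss 2.0 vs 0.5),
# so its parameters dominate the aggregated update.
params = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
global_update = boosted_aggregate(params, client_losses=[0.5, 2.0],
                                  client_sizes=[100, 100])
```

Because the two toy parameter vectors are one-hot, the aggregated vector directly exposes each client's weight: the harder client's coordinate ends up larger, mirroring how boosting shifts emphasis toward under-served regions of the federation.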