Federated Graph Learning (FGL) enables a central server to coordinate model training across distributed clients without sharing local graph data. However, FGL suffers severely from cross-silo domain shifts, where each "silo" (domain) contains a limited number of clients with distinct graph topologies. These heterogeneities induce divergent optimization trajectories, ultimately leading to global model divergence. In this work, we reveal a severe architectural pathology, which we term Structural Orthogonality: the topology-dependent message-passing mechanism forces gradients from different domains to target disjoint coordinates in parameter space. Through a controlled comparison between backbone architectures, we show statistically that GNN updates are near-perpendicular across domains (with projection ratios $\to$ 0). Consequently, naive averaging leads to Consensus Collapse, a phenomenon in which sparse, informative structural signals from individual domains are diluted by the near-zero updates of others. This forces the global model into a sub-optimal state that fails to represent domain-specific structural patterns, resulting in poor generalization. To address this, we propose FedIA, a lightweight server-side framework that reconciles update conflicts without auxiliary communication. FedIA operates in two stages: (1) Global Importance Masking (GIM) identifies a shared parameter subspace to filter out domain-specific structural noise and prevent signal dilution; (2) Confidence-Aware Momentum Weighting (CAM) dynamically re-weights client contributions according to gradient reliability, amplifying stable optimization signals.
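The mechanisms above can be sketched concretely. The snippet below is a minimal, hypothetical illustration, not the paper's actual implementation: `projection_ratio` measures the kind of cross-domain gradient alignment the abstract describes (near-orthogonal updates yield ratios near 0), and `fedia_aggregate` sketches a plausible server-side GIM + CAM pipeline under assumed designs — a shared-importance top-k mask for GIM and momentum-agreement weights for CAM. All function names, the `top_k` and `beta` parameters, and the specific importance/confidence formulas are assumptions for illustration.

```python
import numpy as np

def projection_ratio(g_a, g_b):
    """Magnitude of the cosine similarity between two flattened updates.
    Near-orthogonal updates (the Structural Orthogonality pathology)
    give values close to 0; aligned updates give values close to 1."""
    denom = np.linalg.norm(g_a) * np.linalg.norm(g_b)
    if denom == 0:
        return 0.0
    return abs(np.dot(g_a, g_b)) / denom

def fedia_aggregate(updates, top_k=0.5, momentum=None, beta=0.9):
    """Hypothetical sketch of a FedIA-style aggregation step (assumed design):
      (1) GIM: keep only coordinates whose magnitude is large for *every*
          client, filtering domain-specific structural noise;
      (2) CAM: weight each client by its agreement with a running
          momentum direction, amplifying stable optimization signals."""
    U = np.stack(updates)                          # (n_clients, n_params)

    # GIM: shared importance = minimum |update| across clients per coordinate;
    # keep the top_k fraction of coordinates, zero out the rest.
    importance = np.min(np.abs(U), axis=0)
    k = max(1, int(top_k * U.shape[1]))
    mask = np.zeros(U.shape[1])
    mask[np.argsort(importance)[-k:]] = 1.0
    U = U * mask

    # CAM: confidence = alignment of each masked update with the momentum.
    if momentum is None:
        momentum = U.mean(axis=0)
    conf = np.array([max(projection_ratio(u, momentum), 1e-6) for u in U])
    weights = conf / conf.sum()

    aggregated = (weights[:, None] * U).sum(axis=0)
    new_momentum = beta * momentum + (1 - beta) * aggregated
    return aggregated, new_momentum
```

For instance, two updates lying on disjoint coordinate axes give `projection_ratio` exactly 0, matching the projection-ratios-$\to$-0 diagnosis, while the masked coordinates dropped by the GIM step contribute nothing to the aggregate regardless of the CAM weights.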