Federated learning (FL) on graphs shows promise for distributed time-series forecasting, yet existing methods rely on static topologies and struggle with client heterogeneity. We propose Fed-GAME, a framework that models personalized aggregation as message passing over a learnable, dynamic implicit graph. Its core is a decoupled, parameter-difference-based update protocol in which clients transmit the parameter differences between their fine-tuned private models and a shared global model. On the server, these differences are split into two streams: (1) the averaged difference, used to update the global model for consensus; and (2) the selective differences, fed into a novel Graph Attention Mixture-of-Experts (GAME) aggregator for fine-grained personalization. Within this aggregator, shared experts provide scoring signals while personalized gates adaptively weight the selective updates for each client. Experiments on two real-world electric vehicle charging datasets demonstrate that Fed-GAME outperforms state-of-the-art personalized FL baselines.
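The decoupled two-stream protocol can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the client models are flat parameter vectors, the shared experts are reduced to a single scoring vector, and the personalized gates are reduced to a per-client softmax over pairwise difference affinities; all variable names (`shared_expert`, `diffs`, `gate`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, num_clients = 4, 3

# Shared global model and each client's fine-tuned private model.
global_model = np.zeros(dim)
private_models = [rng.normal(size=dim) for _ in range(num_clients)]

# Clients transmit parameter differences, not raw models.
diffs = [p - global_model for p in private_models]

# Stream 1: the averaged difference updates the global model (consensus).
global_model = global_model + np.mean(diffs, axis=0)

# Stream 2: personalized aggregation of the selective differences.
# A shared scoring vector stands in for the shared experts.
shared_expert = rng.normal(size=dim)
scores = np.array([d @ shared_expert for d in diffs])

personalized_models = []
for i in range(num_clients):
    # Personalized gate: client i's affinity to every client's difference,
    # combined with the shared expert scores, then softmax-normalized.
    logits = np.array([diffs[i] @ d for d in diffs]) + scores
    gate = np.exp(logits - logits.max())
    gate /= gate.sum()
    update = sum(w * d for w, d in zip(gate, diffs))
    personalized_models.append(private_models[i] + update)
```

Because the global model starts at zero here, its update reduces to the plain average of the private models, while each personalized model receives a client-specific weighted combination of all transmitted differences.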