Federated Learning struggles under temporal concept drift where client data distributions shift over time. We demonstrate that standard FedAvg suffers catastrophic forgetting under seasonal drift on Fashion-MNIST, with accuracy dropping from 74% to 28%. We propose client-side experience replay, where each client maintains a small buffer of past samples mixed with current data during local training. This simple approach requires no changes to server aggregation. Experiments show that a 50-sample-per-class buffer restores performance to 78-82%, effectively preventing forgetting. Our ablation study reveals a clear memory-accuracy trade-off as buffer size increases.
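The replay mechanism described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the names `ClassBalancedReplayBuffer`, `mixed_batch`, and the `replay_fraction` parameter are assumptions, and reservoir sampling is one plausible way to keep the per-class buffer bounded (the abstract specifies only a 50-sample-per-class cap).

```python
import random
from collections import defaultdict

class ClassBalancedReplayBuffer:
    """Per-class buffer capped at `per_class` past samples (illustrative).

    Uses reservoir sampling so every sample seen for a class has an
    equal chance of being retained once the cap is reached.
    """
    def __init__(self, per_class=50, seed=0):
        self.per_class = per_class
        self.seen = defaultdict(int)    # samples observed per class label
        self.store = defaultdict(list)  # samples retained per class label
        self.rng = random.Random(seed)

    def add(self, x, y):
        self.seen[y] += 1
        slot = self.store[y]
        if len(slot) < self.per_class:
            slot.append(x)
        else:
            # Reservoir step: replace a stored sample with probability
            # per_class / seen[y], keeping the buffer uniformly sampled.
            j = self.rng.randrange(self.seen[y])
            if j < self.per_class:
                slot[j] = x

    def sample(self, k):
        """Draw k (x, y) pairs from the buffer (with replacement)."""
        pool = [(x, y) for y, xs in self.store.items() for x in xs]
        return [self.rng.choice(pool) for _ in range(k)] if pool else []

def mixed_batch(current_batch, buffer, replay_fraction=0.5):
    """Mix the client's current-round data with replayed past samples.

    Local training then runs on this mixed batch; the server-side
    aggregation (e.g. plain FedAvg) is unchanged.
    """
    n_replay = int(len(current_batch) * replay_fraction)
    return list(current_batch) + buffer.sample(n_replay)
```

Because replay happens entirely inside each client's local update, the server still receives ordinary model deltas and averages them as in standard FedAvg, which is what makes the approach drop-in.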