Federated Learning (FL) shows promise for enabling collaborative learning while preserving privacy. However, most current solutions assume private data is collected from a single domain. A significant challenge arises when client data spans diverse domains (i.e., domain shift), degrading performance on unseen domains. Existing federated domain generalization approaches address this problem but assume each client holds data for an entire domain, limiting their practicality in real-world scenarios with domain-based heterogeneity and client sampling. To overcome this, we introduce FISC, a novel FL domain generalization paradigm that handles more complex domain distributions across clients. FISC enables cross-domain learning by extracting an interpolative style from local styles and employing contrastive learning, providing clients with multi-domain representations and unbiased convergent targets. Empirical results on multiple datasets, including PACS, Office-Home, and IWildCam, show that FISC outperforms state-of-the-art (SOTA) methods, with accuracy improvements of 3.64% to 57.22% on unseen domains. Our code is available at https://anonymous.4open.science/r/FISC-AAAI-16107.
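To make the idea of an "interpolative style" concrete, the sketch below shows one common way style interpolation is realized in domain generalization work: each client summarizes its local style as per-channel feature statistics (mean and standard deviation), a convex combination of these statistics forms an interpolated style, and features are re-normalized toward it (AdaIN-style). This is a generic illustration under those assumptions, not FISC's exact procedure; all function names and weights here are illustrative.

```python
import numpy as np

def extract_style(features):
    # Local "style" as per-channel statistics over spatial dims.
    # features: array of shape (N, C, H, W)
    mu = features.mean(axis=(2, 3), keepdims=True)
    sigma = features.std(axis=(2, 3), keepdims=True) + 1e-6
    return mu, sigma

def interpolate_styles(styles, weights):
    # Convex combination of per-client channel statistics.
    # styles: list of (mu, sigma) pairs; weights: list summing to 1.
    mus, sigmas = zip(*styles)
    w = np.asarray(weights, dtype=np.float64).reshape(-1, 1, 1, 1, 1)
    mu_mix = (w * np.stack(mus)).sum(axis=0)
    sigma_mix = (w * np.stack(sigmas)).sum(axis=0)
    return mu_mix, sigma_mix

def apply_style(features, mu_mix, sigma_mix):
    # Re-normalize features toward the interpolated style
    # (AdaIN-style transfer: strip local stats, apply mixed stats).
    mu, sigma = extract_style(features)
    return sigma_mix * (features - mu) / sigma + mu_mix
```

In an FL setting, only the low-dimensional statistics (not raw features) would need to be shared, which is why style-based augmentation is attractive for cross-client domain mixing.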