Traditional Federated Domain Generalization (FedDG) methods focus on learning domain-invariant features or adapting to unseen target domains, often overlooking the unique knowledge embedded within the source domain, especially in strictly isolated federated learning environments. Through experimentation, we discovered a counterintuitive phenomenon: features learned from a complete source domain have superior generalization capabilities compared to those learned directly from the target domain. This insight leads us to propose the Federated Source Domain Awareness Framework (FedSDAF), the first systematic approach to enhance FedDG by leveraging source domain-aware features. FedSDAF employs a dual-adapter architecture that decouples "local expertise" from "global generalization consensus." A Domain-Aware Adapter, retained locally, extracts and protects the unique discriminative knowledge of each source domain, while a Domain-Invariant Adapter, shared across clients, builds a robust global consensus. To enable knowledge exchange, we introduce a Bidirectional Knowledge Distillation mechanism that facilitates efficient dialogue between the adapters. Extensive experiments on four benchmark datasets (OfficeHome, PACS, VLCS, and DomainNet) show that FedSDAF significantly outperforms existing FedDG methods. The source code is available at https://github.com/pizzareapers/FedSDAF.
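To make the dual-adapter idea concrete, here is a minimal sketch of a bottleneck adapter and a symmetric (bidirectional) distillation loss between the domain-aware and domain-invariant branches. All names, layer sizes, and the exact loss form are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adapter(nn.Module):
    """Residual bottleneck adapter applied on top of frozen backbone features.
    (Hypothetical sketch: dimensions are assumptions, not FedSDAF's actual config.)"""
    def __init__(self, dim: int = 512, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the backbone representation intact
        # while the adapter learns a small correction.
        return x + self.up(F.relu(self.down(x)))

def bidirectional_kd_loss(logits_aware: torch.Tensor,
                          logits_invariant: torch.Tensor,
                          temperature: float = 2.0) -> torch.Tensor:
    """Symmetric KL distillation: each adapter's predictions supervise
    the other, so local expertise and global consensus exchange knowledge."""
    log_p_aware = F.log_softmax(logits_aware / temperature, dim=-1)
    log_p_inv = F.log_softmax(logits_invariant / temperature, dim=-1)
    # F.kl_div expects log-probabilities as input and probabilities as target.
    kd_aware_from_inv = F.kl_div(log_p_aware, log_p_inv.exp(), reduction="batchmean")
    kd_inv_from_aware = F.kl_div(log_p_inv, log_p_aware.exp(), reduction="batchmean")
    return (temperature ** 2) * (kd_aware_from_inv + kd_inv_from_aware)
```

In a federated round under this sketch, each client would update both adapters with the task loss plus this distillation term, then upload only the domain-invariant adapter for server aggregation, while the domain-aware adapter never leaves the client.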