Domain shift is a formidable issue in Machine Learning that causes a model's performance to degrade when it is tested on unseen domains. Federated Domain Generalization (FedDG) attempts to collaboratively train a global model across clients in a privacy-preserving manner such that the model generalizes well to unseen clients, which may exhibit domain shift. However, most existing FedDG methods either introduce additional privacy risks of data leakage or incur significant costs in client communication and computation, both of which are major concerns in the Federated Learning paradigm. To circumvent these challenges, we introduce a novel architectural method for FedDG, namely gPerXAN, which relies on a normalization scheme working in tandem with a guiding regularizer. In particular, we carefully design Personalized eXplicitly Assembled Normalization to enforce that client models selectively filter out domain-specific features biased toward local data while preserving the discriminative power of those features. We then incorporate a simple yet effective regularizer that guides these models to directly capture domain-invariant representations that the global model's classifier can leverage. Extensive experimental results on two benchmark datasets, i.e., PACS and Office-Home, and a real-world medical dataset, Camelyon17, indicate that our proposed method outperforms existing methods in addressing this particular problem.
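The abstract does not spell out the internals of the proposed normalization; purely as an illustrative sketch (not the authors' exact design), a "personalized assembled normalization" can be thought of as a per-channel learnable mixture of instance normalization (which suppresses per-sample, domain-specific statistics) and batch normalization (which preserves discriminative batch statistics). The gate parameter below, kept client-local, is an assumed construction for illustration only:

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    # Normalize each sample's channels over spatial dims; x has shape (N, C, H, W).
    mu = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def batch_norm(x, eps=1e-5):
    # Normalize each channel over batch and spatial dims.
    mu = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def assembled_norm(x, gate):
    # gate: hypothetical per-channel mixing weight in [0, 1], kept client-local
    # ("personalized"); gate -> 1 filters domain-specific statistics (IN),
    # gate -> 0 retains discriminative batch statistics (BN).
    g = gate.reshape(1, -1, 1, 1)
    return g * instance_norm(x) + (1.0 - g) * batch_norm(x)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3, 8, 8))
gate = np.array([0.2, 0.5, 0.9])
y = assembled_norm(x, gate)
print(y.shape)  # (4, 3, 8, 8)
```

At the extremes the mixture reduces to pure instance or pure batch normalization; intermediate gate values trade off domain-specific filtering against discriminability per channel.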