Federated Learning (FL) enables distributed optimization without compromising data sovereignty. Yet when local label distributions are mutually exclusive, standard weight aggregation fails because clients follow conflicting optimization trajectories. Moreover, many FL methods rely on pretrained foundation models, an often unrealistic assumption. We introduce FederatedFactory, a zero-dependency framework that inverts the unit of federation from discriminative parameters to generative priors. By exchanging generative modules in a single communication round, our architecture supports ex nihilo synthesis of a globally class-balanced dataset, eliminating both gradient conflict and external prior bias. Evaluations on diverse medical imaging benchmarks, including MedMNIST and ISIC2019, show that our approach recovers the centralized upper-bound performance. Under pathological heterogeneity, it lifts CIFAR-10 baseline accuracy from a collapsed 11.36% to 90.57% and restores ISIC2019 AUROC to 90.57%. The framework additionally enables exact modular unlearning through deterministic deletion of the corresponding generative modules.
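The protocol the abstract describes can be sketched end to end: each client ships a generative module for the classes it holds, the server collects all modules in one round, synthesizes a class-balanced dataset, and unlearns a client by deleting its module. The sketch below is a toy illustration under strong simplifying assumptions — all names (`GenerativeModule`, `synthesize_balanced`, `unlearn`) are hypothetical, and per-class 1-D Gaussians stand in for the learned generative priors of the actual method.

```python
import random


class GenerativeModule:
    """A client's generative prior over the classes it holds (toy: 1-D Gaussians)."""

    def __init__(self, client_id, data_by_class):
        self.client_id = client_id
        self.stats = {}
        for label, xs in data_by_class.items():
            mean = sum(xs) / len(xs)
            var = sum((x - mean) ** 2 for x in xs) / len(xs)
            self.stats[label] = (mean, var ** 0.5)

    def sample(self, label, n, rng):
        mean, std = self.stats[label]
        return [rng.gauss(mean, std) for _ in range(n)]


def synthesize_balanced(modules, per_class, rng):
    """Server side: after the single round of module exchange, draw the same
    number of synthetic samples for every class seen by any client."""
    labels = sorted({l for m in modules for l in m.stats})
    dataset = []
    for label in labels:
        holders = [m for m in modules if label in m.stats]
        for i in range(per_class):
            module = holders[i % len(holders)]  # round-robin over holders
            for x in module.sample(label, 1, rng):
                dataset.append((x, label))
    return dataset


def unlearn(modules, client_id):
    """Exact modular unlearning: deterministically drop the departing
    client's generative module before re-synthesis."""
    return [m for m in modules if m.client_id != client_id]


rng = random.Random(0)
# Pathological heterogeneity: mutually exclusive label sets per client.
client_a = GenerativeModule("a", {0: [0.1, 0.2, 0.15], 1: [1.0, 1.1, 0.9]})
client_b = GenerativeModule("b", {2: [2.0, 2.2, 1.9]})

data = synthesize_balanced([client_a, client_b], per_class=4, rng=rng)
remaining = unlearn([client_a, client_b], "b")
```

Because no gradients are ever aggregated, the conflicting-trajectory problem never arises; the downstream classifier would simply be trained centrally on `data`.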