Organizations and enterprises across domains such as healthcare, finance, and scientific research are increasingly required to extract collective intelligence from distributed, siloed datasets while adhering to strict privacy, regulatory, and sovereignty requirements. Federated Learning (FL) enables collaborative model building without sharing sensitive raw data, but it faces growing challenges from statistical heterogeneity, system diversity, and the computational burden of complex models. This study examines the potential of quantum-assisted federated learning, which could reduce the parameter count of classical models by polylogarithmic factors and thereby lessen training overhead. Accordingly, we introduce QFed, a quantum-enabled federated learning framework designed to improve computational efficiency across networks of edge devices. We evaluate the proposed framework on the widely adopted FashionMNIST dataset. Experimental results show that QFed achieves a 77.6% reduction in the parameter count of a VGG-like model while maintaining accuracy comparable to classical approaches in a scalable setting. These results highlight the potential of leveraging quantum computing within a federated learning context to strengthen the FL capabilities of edge devices.