Federated Learning (FL) enables collaborative training without centralizing data, which is essential for privacy compliance in real-world scenarios involving sensitive visual information. Most FL approaches rely on expensive, iterative deep network optimization, which still risks privacy via shared gradients. In this work, we propose FedHENet, extending the FedHEONN framework to image classification. By using a fixed, pre-trained feature extractor and learning only a single output layer, we avoid costly local fine-tuning. This layer is learned by analytically aggregating client knowledge in a single round of communication using homomorphic encryption (HE). Experiments show that FedHENet achieves competitive accuracy compared to iterative FL baselines while demonstrating superior stability and up to 70\% better energy efficiency. Crucially, our method is hyperparameter-free, removing the carbon footprint associated with hyperparameter tuning in standard FL. Code is available at https://github.com/AlejandroDopico2/FedHENet/
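To illustrate the single-round analytic aggregation described above, the sketch below uses a simplified linear least-squares output layer over frozen features: each client shares only summary statistics, and the server solves for the output weights in one shot. This is a conceptual approximation, not FedHENet's exact closed-form update, and the homomorphic encryption of the shared statistics is omitted for brevity.

```python
import numpy as np

def client_update(features, targets):
    """Compute local summary statistics from frozen-extractor features.

    In the actual protocol these statistics would be homomorphically
    encrypted before being sent to the server (omitted here).
    """
    A = features.T @ features  # (d, d) Gram matrix
    b = features.T @ targets   # (d,) cross-correlation
    return A, b

def server_aggregate(stats, reg=1e-3):
    """Single communication round: sum client statistics and solve
    a regularized least-squares problem for the output-layer weights.
    Addition distributes over clients, so this equals the centralized
    solution without the server ever seeing raw data."""
    A = sum(s[0] for s in stats)
    b = sum(s[1] for s in stats)
    d = A.shape[0]
    return np.linalg.solve(A + reg * np.eye(d), b)
```

Because the aggregation is a plain sum, it matches the centralized solution exactly and needs no iterative optimization or learning-rate tuning, which is the source of the single-round, hyperparameter-free behavior the abstract highlights.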