Federated learning (FL) enables collaborative model training in edge environments without sharing raw data, but it is constrained by limited communication bandwidth and heterogeneous client data distributions. Prototype-based FL mitigates these constraints by exchanging class-wise feature prototypes instead of full model parameters; however, existing methods still suffer from suboptimal generalization under severe communication constraints. In this paper, we propose RefProtoFL, a communication-efficient FL framework that couples External-Referenced Prototype Alignment (ERPA), for representation consistency, with Adaptive Probabilistic Update Dropping (APUD), for communication efficiency. Specifically, we decompose the model into a private backbone and a lightweight shared adapter, and restrict federated communication to the adapter parameters. To further reduce uplink cost, APUD performs magnitude-aware Top-K sparsification, transmitting only the most significant adapter updates for server-side aggregation. To address representation inconsistency across heterogeneous clients, ERPA leverages a small server-held public dataset to construct external reference prototypes that serve as shared semantic anchors. For classes covered by the public data, clients directly align local representations to the public-induced prototypes; for uncovered classes, alignment relies on global reference prototypes that the server aggregates via weighted averaging. Extensive experiments on standard benchmarks demonstrate that RefProtoFL attains higher classification accuracy than state-of-the-art prototype-based FL baselines.
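To make the uplink-compression step concrete, the following is a minimal sketch of magnitude-aware Top-K sparsification of a flat adapter update, as performed on each client before transmission. The function name and the `k_ratio` parameter are illustrative assumptions, not part of the paper's API.

```python
import numpy as np

def topk_sparsify(update, k_ratio=0.1):
    """Keep only the largest-magnitude fraction of an adapter update.

    Hypothetical sketch of APUD-style magnitude-aware Top-K
    sparsification: all entries except the top-k by absolute value
    are dropped (set to zero) before upload.
    """
    flat = update.ravel()
    k = max(1, int(k_ratio * flat.size))
    # Indices of the k entries with the largest absolute value.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(update.shape), idx

# Keep the top 50% (3 of 6) of this toy update vector.
update = np.array([0.05, -0.9, 0.3, -0.02, 0.7, 0.1])
sparse, kept = topk_sparsify(update, k_ratio=0.5)
```

In practice only the kept values and their indices need to be transmitted, which is where the uplink savings come from.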