Spiking Federated Learning (SFL) has been widely studied owing to the energy efficiency of Spiking Neural Networks (SNNs). However, existing SFL methods require model homogeneity and assume that all clients have sufficient computational resources, which excludes resource-constrained clients from training. To cope with the system heterogeneity prevalent in real-world deployments, it is crucial to enable heterogeneous SFL systems in which clients adaptively deploy models of different scales according to their local resources. To this end, we introduce SFedHIFI, a novel Spiking Federated Learning framework with Fire-Rate-Based Heterogeneous Information Fusion. Specifically, SFedHIFI employs channel-wise matrix decomposition to deploy SNN models of adaptive complexity on clients with heterogeneous resources. Building on this, the proposed heterogeneous information fusion module enables cross-scale aggregation among models of different widths, improving the utilization of diverse local knowledge. Extensive experiments on three public benchmarks demonstrate that SFedHIFI effectively enables heterogeneous SFL and consistently outperforms all three baseline methods. Compared with ANN-based FL, it achieves substantial energy savings at only a marginal cost in accuracy.
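To make the two core ideas concrete, the following is a minimal illustrative sketch, not the paper's exact algorithm: sub-models of different widths are obtained by slicing the leading channels of a full weight matrix, and the server fuses them by averaging the overlapping channel blocks with per-client weights. The helper names (`extract_submodel`, `aggregate`) and the use of scalar fire rates as fusion weights are assumptions for illustration.

```python
import numpy as np

def extract_submodel(W, ratio):
    """Hypothetical channel-wise slicing: keep the leading fraction
    `ratio` of output and input channels of the full matrix W."""
    out_c, in_c = W.shape
    return W[: int(out_c * ratio), : int(in_c * ratio)].copy()

def aggregate(full_shape, client_weights, fire_rates):
    """Illustrative cross-scale fusion: average each entry of the full
    matrix over the clients whose sub-model covers it, weighting every
    client by a scalar fire rate (an assumption; the paper's module may
    use richer fire-rate statistics)."""
    acc = np.zeros(full_shape)
    norm = np.zeros(full_shape)
    for W_c, r in zip(client_weights, fire_rates):
        o, i = W_c.shape
        acc[:o, :i] += r * W_c   # accumulate the covered block
        norm[:o, :i] += r        # track total weight per entry
    fused = np.zeros(full_shape)
    covered = norm > 0
    fused[covered] = acc[covered] / norm[covered]
    return fused

# Usage: three clients with width ratios 1.0, 0.5, 0.25.
rng = np.random.default_rng(0)
W_global = rng.normal(size=(8, 8))
clients = [extract_submodel(W_global, r) for r in (1.0, 0.5, 0.25)]
W_new = aggregate(W_global.shape, clients, fire_rates=[0.3, 0.5, 0.2])
```

Because every sub-model here is a slice of the same global matrix, the fused result reproduces the global weights wherever at least one client covers them; in actual training the client weights diverge locally and the fusion combines their knowledge across scales.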