Federated Neuromorphic Learning (FNL) enables energy-efficient, privacy-preserving learning on devices without centralizing data. However, real-world deployments require additional privacy mechanisms that can significantly alter training signals. This paper analyzes how Differential Privacy (DP) mechanisms, specifically gradient clipping and noise injection, perturb firing-rate statistics in Spiking Neural Networks (SNNs) and how these perturbations propagate into rate-based FNL coordination. On a speech recognition task under non-IID settings, ablations across privacy budgets and clipping bounds reveal systematic rate shifts, attenuated aggregation, and ranking instability during client selection. Moreover, we relate these shifts to sparsity and memory indicators. Our findings offer actionable guidance for privacy-preserving FNL, in particular on balancing privacy strength against rate-dependent coordination.
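To make the two DP mechanisms concrete, the sketch below shows standard DP-SGD-style sanitization of client gradients: per-sample L2 clipping followed by Gaussian noise calibrated to the clipping bound. The function `dp_sanitize` and its parameters `clip_bound` and `noise_multiplier` are illustrative names, not the paper's implementation; this is a minimal sketch of the generic mechanism whose effect on firing-rate statistics the paper studies.

```python
import numpy as np

def dp_sanitize(per_sample_grads, clip_bound, noise_multiplier, rng=None):
    """Clip each per-sample gradient to L2 norm <= clip_bound, average,
    then add Gaussian noise scaled to the clipping bound (DP-SGD style)."""
    rng = rng or np.random.default_rng()
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Rescale only gradients whose norm exceeds the bound.
        clipped.append(g * min(1.0, clip_bound / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise standard deviation follows the usual sigma = z * C / batch_size.
    sigma = noise_multiplier * clip_bound / len(per_sample_grads)
    return mean_grad + rng.normal(0.0, sigma, size=mean_grad.shape)

# Hypothetical usage: a tighter clip_bound or larger noise_multiplier
# (stronger privacy) biases the update, which in an SNN shifts the
# post-update firing rates that rate-based FNL coordination relies on.
grads = [np.random.randn(4) for _ in range(8)]
print(dp_sanitize(grads, clip_bound=1.0, noise_multiplier=1.2))
```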