Federated Learning (FL) enables efficient data exchange in Internet of Things (IoT) environments by training Machine Learning (ML) models locally and sharing only model updates. However, FL is vulnerable to privacy threats such as model inversion and membership inference attacks, which can expose sensitive training data. Differential Privacy (DP) mechanisms are often applied to address these concerns, yet adding DP noise to black-box ML models degrades performance, especially in dynamic IoT systems where continuous, lifelong FL accumulates excessive noise over time. To mitigate this issue, we introduce Federated Hyperdimensional Computing with Privacy Preservation (FedHDPrivacy), an eXplainable Artificial Intelligence (XAI) framework that combines the neuro-symbolic paradigm with DP. FedHDPrivacy balances privacy and performance by theoretically tracking the cumulative noise from previous rounds and adding only the incremental noise required to meet the privacy guarantee. In a real-world case study on in-process monitoring of manufacturing machining operations, FedHDPrivacy demonstrates robust performance, outperforming standard FL frameworks, including Federated Averaging (FedAvg), Federated Stochastic Gradient Descent (FedSGD), Federated Proximal (FedProx), Federated Normalized Averaging (FedNova), and Federated Adam (FedAdam), by up to 38%. FedHDPrivacy also offers potential for future enhancements, such as multimodal data fusion.
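The incremental-noise idea can be illustrated with a minimal sketch: because independent Gaussian noise variances add, a round only needs to inject the variance shortfall between the current privacy target and the noise already accumulated from previous rounds. The function and variable names below (`incremental_noise_std`, the per-round target schedule) are illustrative assumptions, not FedHDPrivacy's exact accounting, which is derived in the paper.

```python
import math
import random

def incremental_noise_std(required_std: float, accumulated_var: float) -> float:
    """Std of fresh Gaussian noise so total variance meets the round's target.

    Independent Gaussian variances add, so if prior rounds already contribute
    `accumulated_var`, only the shortfall must be injected this round.
    (Illustrative sketch; hypothetical interface, not the paper's exact method.)
    """
    shortfall = required_std ** 2 - accumulated_var
    return math.sqrt(shortfall) if shortfall > 0 else 0.0

# Simulate a few federated rounds with a growing per-round noise target
# (an assumed schedule for illustration).
rng = random.Random(0)
model = [0.0] * 4            # placeholder for a model update / hypervector
accumulated_var = 0.0
for required_std in [1.0, 1.2, 1.3]:
    fresh_std = incremental_noise_std(required_std, accumulated_var)
    model = [w + rng.gauss(0.0, fresh_std) for w in model]
    accumulated_var += fresh_std ** 2
```

Adding the full target noise every round would over-perturb the model; here each round's injection shrinks as variance accumulates, which is the source of the performance gap versus naive per-round DP.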