Federated Learning (FL) offers a promising solution to the privacy concerns associated with centralized Machine Learning (ML) by enabling decentralized, collaborative learning. However, FL is vulnerable to various security threats, including poisoning attacks, in which adversarial clients manipulate the training data or model updates to degrade overall model performance. Recognizing this threat, researchers have focused on developing defense mechanisms to counteract poisoning attacks in FL systems. However, existing robust FL methods predominantly target computer vision tasks, leaving a gap in addressing the unique challenges of FL with time-series data. In this paper, we present FLORAL, a defense mechanism designed to mitigate poisoning attacks in federated learning for time-series tasks, even in scenarios with heterogeneous client data and a large number of adversarial participants. Unlike traditional model-centric defenses, FLORAL leverages logical reasoning to evaluate client trustworthiness by aligning their predictions with global time-series patterns, rather than relying solely on the similarity of client updates. Our approach extracts logical reasoning properties from clients, hierarchically infers global properties, and uses these to verify client updates. Through formal logic verification, we assess the robustness of each client contribution, identifying deviations indicative of adversarial behavior. Experimental results on two datasets demonstrate the superior performance of our approach compared to existing baseline methods, highlighting its potential to enhance the robustness of FL for time-series applications. Notably, FLORAL reduced the prediction error by 93.27% in the best-case scenario compared to the second-best baseline. Our code is available at https://anonymous.4open.science/r/FLORAL-Robust-FTS.