Federated Learning (FL) is a collaborative method for training machine learning models while preserving the confidentiality of the participants' training data. Nevertheless, FL is vulnerable to reconstruction attacks that exploit shared parameters to reveal private training data. In this paper, we address this issue in the cybersecurity domain by applying Multi-Input Functional Encryption (MIFE) to a recent FL implementation for training ML-based network intrusion detection systems. We assess both classical and post-quantum solutions in terms of memory cost and computational overhead in the FL process, highlighting their impact on convergence time.