Federated learning (FL) schemes allow multiple participants to collaboratively train neural networks without directly sharing the underlying data. However, in early schemes, all participants eventually obtain the same model. Moreover, the aggregation is typically carried out by a third party, who obtains the combined gradients or weights and may thereby reconstruct the model. These downsides underscore the demand for fair and privacy-preserving FL schemes. Here, collaborative fairness requires that the quality of each participant's individual model depends on its individual data contribution, while privacy must hold with respect to any data outsourced to the third party. Several approaches already aim for either fair or privacy-preserving FL, and a few works even address both features. In this paper, we build upon these seminal works and present a novel fair and privacy-preserving FL scheme. Our approach, which mainly relies on homomorphic encryption, stands out for exclusively using local gradients. This increases usability compared to state-of-the-art approaches and thereby opens the door to applications in control.
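To make the core mechanism concrete, the following is a minimal sketch of additively homomorphic gradient aggregation using textbook Paillier encryption. It is not the paper's scheme: the toy primes, the fixed-point scaling, and the single-keyholder setup are simplifying assumptions for illustration only. The point is that the aggregator multiplies ciphertexts and thereby sums the clients' local gradients without ever seeing them in plaintext.

```python
import math
import random

# Toy Paillier parameters -- far too small for real security.
P, Q = 293, 433
N = P * Q                 # public modulus
N2 = N * N
G = N + 1                 # standard generator choice g = n + 1
LAM = math.lcm(P - 1, Q - 1)                  # private key
MU = pow((pow(G, LAM, N2) - 1) // N, -1, N)   # precomputed inverse

def encrypt(m: int) -> int:
    """Encrypt integer m (mod N): c = g^m * r^N mod N^2."""
    r = random.randrange(1, N)
    while math.gcd(r, N) != 1:                # r must be invertible mod N
        r = random.randrange(1, N)
    return (pow(G, m, N2) * pow(r, N, N2)) % N2

def decrypt(c: int) -> int:
    """Recover m via L(c^lambda mod N^2) * mu mod N, L(x) = (x-1)/N."""
    return ((pow(c, LAM, N2) - 1) // N) * MU % N

SCALE = 100  # fixed-point scaling so float gradients become integers

def aggregate(local_gradients):
    """Each client encrypts its local gradient; the aggregator
    multiplies ciphertexts, which adds the plaintexts homomorphically."""
    ciphers = [encrypt(round(g * SCALE)) for g in local_gradients]
    agg = 1
    for c in ciphers:
        agg = (agg * c) % N2                  # ciphertext product = plaintext sum
    total = decrypt(agg)
    if total > N // 2:                        # map back to signed range
        total -= N
    return total / SCALE

print(aggregate([0.12, -0.05, 0.30]))  # → 0.37, the sum of the local gradients
```

In a full FL round, this aggregation would run per coordinate of the gradient vector, and the decryption key would be held by the participants (or shared among them) rather than by the aggregating third party.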