Federated Learning (FL) is a form of distributed learning in which multiple institutions, or clients, collaboratively train a global model to solve a shared task. This allows the model to utilize information from every institution while preserving data privacy. However, recent studies show that existing methods do not uphold this promise of privacy: it is possible to reconstruct the training data of individual institutions by exploiting the gradients exchanged between the clients and the global server during training, or by knowing the model architecture at the client end. In this paper, we propose a federated learning framework for semantic segmentation that neither exposes the model architecture nor transfers gradients between client and server, thereby enabling better privacy preservation. We propose BlackFed, a black-box adaptation of neural networks that uses zeroth-order optimization (ZOO) to update the client model weights and first-order optimization (FOO) to update the server weights. We evaluate our approach on several computer vision and medical imaging datasets to demonstrate its effectiveness. To the best of our knowledge, this is one of the first works to employ federated learning for segmentation without exchanging gradients or model information. Code: https://github.com/JayParanjape/blackfed/tree/master
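To illustrate the gradient-free client update mentioned above, the sketch below uses a two-point simultaneous-perturbation (SPSA) estimator, a standard zeroth-order optimization technique that queries only loss values and never backpropagates. This is an illustrative toy on a quadratic loss, not BlackFed's actual estimator; all function and variable names here are assumptions for demonstration.

```python
import numpy as np

def spsa_gradient(loss_fn, w, c=1e-3, rng=None):
    """Estimate the gradient of loss_fn at w from two loss queries
    along a random Rademacher perturbation (no backpropagation)."""
    rng = rng or np.random.default_rng(0)
    delta = rng.choice([-1.0, 1.0], size=w.shape)  # random +/-1 directions
    # Finite-difference along delta; for Rademacher, 1/delta == delta.
    return (loss_fn(w + c * delta) - loss_fn(w - c * delta)) / (2 * c) * delta

def zoo_step(loss_fn, w, lr=0.05, c=1e-3, rng=None):
    """One zeroth-order update: perturb, query the loss twice, step."""
    return w - lr * spsa_gradient(loss_fn, w, c=c, rng=rng)

# Toy client objective: a quadratic "loss" around a target weight vector.
target = np.array([1.0, -2.0, 0.5])
loss = lambda w: float(np.sum((w - target) ** 2))

w = np.zeros(3)
rng = np.random.default_rng(42)
for _ in range(200):
    w = zoo_step(loss, w, rng=rng)
print(loss(w))  # the loss shrinks toward 0 as w approaches target
```

In expectation the SPSA estimate equals the true gradient, so the update behaves like noisy gradient descent while the "server" only ever observes loss values, never gradients or architecture details.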