Federated learning (FL) is a collaborative and privacy-preserving machine learning paradigm, allowing the development of robust models without the need to centralise sensitive data. A critical challenge in FL lies in fairly and accurately allocating contributions from diverse participants. Inaccurate allocation can undermine trust, lead to unfair compensation, and thus leave participants with little incentive to join or actively contribute to the federation. Various remuneration strategies have been proposed to date, including auction-based approaches and Shapley-value-based methods, the latter offering a means to quantify the contribution of each participant. However, little to no work has studied the stability of these contribution evaluation methods. In this paper, we focus on calculating contributions using gradient-based model reconstruction techniques with Shapley values. We first show that baseline Shapley values do not accurately reflect clients' contributions, leading to unstable reward allocations amongst participants in a cross-silo federation. We then introduce \textsc{FedRandom}, a new method that mitigates these shortcomings with additional data samplings, and show its efficacy at increasing the stability of contribution evaluation in federated learning.
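To make the Shapley-value idea behind such contribution evaluation concrete, the following is a minimal sketch of exact Shapley computation over client coalitions. It is not the paper's gradient-based reconstruction method: the coalition utilities here are hypothetical placeholder accuracies, standing in for the accuracy of a model reconstructed from each subset of clients' gradients.

```python
import itertools
import math

def shapley_values(clients, utility):
    """Exact Shapley values: each client's average marginal
    contribution to utility, over all orderings of the clients."""
    n = len(clients)
    values = {c: 0.0 for c in clients}
    for perm in itertools.permutations(clients):
        coalition = []
        prev = utility(frozenset(coalition))
        for c in perm:
            coalition.append(c)
            cur = utility(frozenset(coalition))
            values[c] += cur - prev  # marginal gain of adding c
            prev = cur
    for c in clients:
        values[c] /= math.factorial(n)  # average over all n! orderings
    return values

# Hypothetical utilities: "accuracy" of the model each coalition yields.
acc = {
    frozenset(): 0.0,
    frozenset({"A"}): 0.6,
    frozenset({"B"}): 0.5,
    frozenset({"A", "B"}): 0.8,
}
phi = shapley_values(["A", "B"], lambda s: acc[s])
```

Because exact computation enumerates all $n!$ orderings, practical FL contribution schemes approximate it by sampling permutations; the instability the paper studies arises from this and from the stochasticity of the utility measurements themselves.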