It is well known that Federated Learning (FL) is vulnerable to manipulated updates from clients. In this work we study the impact of data heterogeneity on clients' incentives to manipulate their updates. We formulate a game in which clients may upscale their gradient updates in order to ``steer'' the server model to their advantage. We develop a payment rule that disincentivizes sending large gradient updates and steers clients towards truthfully reporting their gradients. We also derive explicit bounds on the clients' payments and on the convergence rate of the global model, which allows us to study the trade-off between heterogeneity, payments, and convergence.
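To make the incentive problem concrete, the following is a minimal sketch of the setting described above: a strategic client can upscale its true gradient to pull the averaged server update towards its own objective, and a norm-based payment makes that upscaling costly. The specific payment rule here (a charge proportional to the squared norm of the reported update, with price parameter `lam`) is an illustrative assumption, not the rule derived in the paper.

```python
import numpy as np

def client_report(true_grad, scale=1.0):
    # A strategic client may upscale its true gradient by `scale` > 1
    # to "steer" the aggregated server model to its advantage.
    return scale * true_grad

def payment(reported_grad, lam=0.1):
    # Hypothetical payment rule: charge proportionally to the squared norm
    # of the reported update, so inflated updates are strictly more costly.
    # `lam` is an illustrative price parameter, not from the paper.
    return lam * float(np.dot(reported_grad, reported_grad))

def server_aggregate(reports):
    # Standard FedAvg-style averaging of the reported client updates.
    return np.mean(reports, axis=0)

true_grad = np.array([1.0, -2.0, 0.5])
honest = client_report(true_grad)              # truthful report, scale = 1
inflated = client_report(true_grad, scale=3.0) # upscaled to steer the model

# The inflated report shifts the average more, but incurs a strictly
# larger payment, which is the disincentive the payment rule creates.
print(payment(honest))
print(payment(inflated))
```

Under this toy rule the payment grows quadratically in the scaling factor, so whether upscaling pays off depends on how much a client values steering the global model, which is where data heterogeneity enters the analysis.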