To alleviate the training burden in federated learning while enhancing convergence speed, Split Federated Learning (SFL) has emerged as a promising approach that combines the advantages of federated and split learning. However, recent studies have largely overlooked the competitive interactions that arise in SFL. In this setting, the SFL model owner chooses the cut layer to balance the training load between the server and clients while ensuring the level of privacy the clients require. The model owner also sets incentives to encourage clients to participate in the SFL process. These optimization choices in turn shape each client's decision on how much data to contribute, given the incentives shared among clients and the energy each client expects to consume during SFL. We model this interaction as a hierarchical decision-making problem, formulated as a single-leader multi-follower Stackelberg game. We prove the existence and uniqueness of the Nash equilibrium among the clients and characterize the Stackelberg equilibrium by analyzing the leader's game. Furthermore, we discuss privacy concerns related to differential privacy and the criterion for selecting the minimum required cut layer. Our results show that the Stackelberg equilibrium solution maximizes the utilities of both the clients and the SFL model owner.
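The follower-level game can be sketched numerically. The snippet below is a minimal illustration, assuming a hypothetical proportional reward-sharing utility u_i(x) = R·x_i/Σ_j x_j − c_i·x_i, where R is the leader's total incentive, x_i is client i's data contribution, and c_i its per-unit energy cost; this is an illustrative stand-in, not the paper's exact utility model. Best-response iteration among the clients converges to the unique Nash equilibrium, mirroring the existence and uniqueness result stated above.

```python
import math

def best_response(R, c_i, x_others):
    # argmax_x  R*x/(x + x_others) - c_i*x
    # First-order condition gives x = sqrt(R*x_others/c_i) - x_others.
    return max(0.0, math.sqrt(R * x_others / c_i) - x_others)

def follower_nash(R, costs, iters=200):
    """Gauss-Seidel best-response iteration over all clients."""
    x = [1.0] * len(costs)  # arbitrary positive starting contributions
    for _ in range(iters):
        for i, c_i in enumerate(costs):
            x_others = sum(x) - x[i]
            x[i] = best_response(R, c_i, x_others)
    return x

# Symmetric example: n = 3 clients, incentive R = 9, unit cost c = 1.
# The closed-form symmetric equilibrium is x_i = R*(n-1)/(n^2 * c) = 2.
x_star = follower_nash(R=9.0, costs=[1.0, 1.0, 1.0])
print(x_star)
```

On top of this follower equilibrium, the leader (the SFL model owner) would then select R and the cut layer to maximize its own utility, anticipating the clients' equilibrium response.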