The success of federated learning (FL) ultimately depends on how strategic participants behave under partial observability, yet most formulations still treat FL as a static optimization problem. We instead view FL deployments as governed strategic systems and develop an analytical framework that separates welfare-improving behavior from metric gaming. Within this framework, we introduce indices that quantify manipulability, the price of gaming, and the price of cooperation, and we use them to study how rules, information disclosure, evaluation metrics, and aggregator-switching policies reshape incentives and cooperation patterns. We derive threshold conditions for deterring harmful gaming while preserving benign cooperation, and for triggering auto-switch rules when early-warning indicators become critical. Building on these results, we construct a design toolkit including a governance checklist and a simple audit-budget allocation algorithm with a provable performance guarantee. Simulations across diverse stylized environments and a federated learning case study consistently match the qualitative and quantitative patterns predicted by our framework. Taken together, our results provide design principles and operational guidelines for reducing metric gaming while sustaining stable, high-welfare cooperation in FL platforms.
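To make the audit-budget allocation idea concrete, the following is a minimal illustrative sketch, not the paper's actual algorithm: it assumes each participant's expected deterrence value is separable and concave in the number of audits spent on them (here the hypothetical form `w_i * (1 - (1 - p_i)**b)`, with `w_i` a risk weight and `p_i` a per-audit detection probability, both invented for illustration). Under that assumption, greedily spending each unit of budget where the marginal gain is largest is provably optimal for separable concave objectives, which is the flavor of guarantee the abstract alludes to.

```python
import heapq

def allocate_audits(risk_weights, detect_prob, budget):
    """Greedy audit-budget allocation over participants (illustrative).

    Assumes participant i's deterrence value from b audits is
    w_i * (1 - (1 - p_i)**b): concave and increasing in b. For a
    separable concave objective with unit-sized audit increments,
    the greedy rule is optimal.
    """
    n = len(risk_weights)
    alloc = [0] * n

    def marginal(i):
        # Gain from the (b+1)-th audit of participant i:
        # w_i * p_i * (1 - p_i)**b, which shrinks as b grows.
        w, p, b = risk_weights[i], detect_prob[i], alloc[i]
        return w * p * (1 - p) ** b

    # Max-heap on marginal gain (negated for heapq's min-heap).
    heap = [(-marginal(i), i) for i in range(n)]
    heapq.heapify(heap)
    for _ in range(budget):
        _, i = heapq.heappop(heap)
        alloc[i] += 1
        heapq.heappush(heap, (-marginal(i), i))
    return alloc
```

For example, `allocate_audits([3.0, 1.0, 1.0], [0.5, 0.5, 0.5], 4)` concentrates audits on the highest-risk participant until diminishing returns make auditing the others more valuable.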