Decentralized federated learning (D-FL) enables privacy-preserving training without a central server, but multi-hop model exchange and aggregation are often bottlenecked by constrained communication resources. To address this issue, we propose a joint routing-and-pruning framework that optimizes routing paths and pruning rates to keep communication latency within a prescribed budget. We analyze how the sum of the model biases across all clients affects the convergence bound of D-FL, and formulate an optimization problem that maximizes the model retention rate so as to minimize these biases under the communication constraints. Further analysis reveals that each client's model retention rate is path-dependent, which reduces the original problem to a routing optimization problem. Leveraging this insight, we develop a routing algorithm that selects latency-efficient transmission paths, allowing more parameters to be delivered within the time budget and thereby improving D-FL convergence. Simulations demonstrate that, compared with unpruned systems, the proposed framework reduces the average transmission latency by 27.8% and improves the test accuracy by approximately 12%; relative to standard benchmark routing algorithms, the proposed routing method improves accuracy by roughly 8%.
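To make the routing-then-pruning idea concrete, the following is a minimal Python sketch, not the paper's algorithm: it picks the minimum-latency path with Dijkstra's algorithm over per-bit link latencies, then sets the retention rate to the largest fraction of the model deliverable within the latency budget. The topology, the per-bit latency numbers, and the helper names min_latency_path and retention_rate are all hypothetical illustrations, and the sketch decouples routing from pruning rather than solving them jointly.

```python
import heapq

def min_latency_path(graph, src, dst):
    """Dijkstra over per-bit link latencies.

    graph: dict mapping node -> list of (neighbor, seconds_per_bit) edges.
    Returns (total seconds-per-bit along the path, list of nodes), or
    (float('inf'), []) if dst is unreachable.
    """
    dist, prev, visited = {src: 0.0}, {}, set()
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return float('inf'), []
    path, node = [dst], dst
    while node != src:          # walk predecessors back to the source
        node = prev[node]
        path.append(node)
    return dist[dst], path[::-1]

def retention_rate(model_bits, path_cost, latency_budget):
    """Largest fraction of parameters deliverable within the budget.

    path_cost is the summed seconds-per-bit of the chosen path, so the
    unpruned model takes model_bits * path_cost seconds end to end.
    """
    full_latency = model_bits * path_cost
    if full_latency <= latency_budget:
        return 1.0                         # no pruning needed
    return latency_budget / full_latency   # prune away the remainder

# Toy 4-client topology with hypothetical per-bit latencies (s/bit).
graph = {
    'A': [('B', 2e-9), ('C', 5e-9)],
    'B': [('A', 2e-9), ('D', 3e-9)],
    'C': [('A', 5e-9), ('D', 1e-9)],
    'D': [('B', 3e-9), ('C', 1e-9)],
}
cost, path = min_latency_path(graph, 'A', 'D')
rho = retention_rate(model_bits=8e7, path_cost=cost, latency_budget=0.3)
print(path, f"retention rate = {rho:.2f}")  # ['A', 'B', 'D'] retention rate = 0.75
```

The sketch illustrates why the retention rate is path-dependent in the sense the abstract describes: a cheaper path lowers the per-bit latency, so more parameters fit inside the same budget, which is exactly what latency-efficient routing buys before any pruning decision is made.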