Heterogeneity of data distributions poses a challenge in many modern federated learning tasks. We formalize it as an optimization problem involving a computationally expensive composite term under a data-similarity condition. Under several different sets of assumptions, we develop communication-efficient methods, and for the convex case we propose an optimal algorithm. The resulting theory is validated through a series of experiments on a variety of problems.
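A minimal sketch of the problem class referred to above, in illustrative notation (the symbols $f_i$, $r$, $\delta$, and $n$ are assumptions here, not taken from the abstract): a composite minimization over $n$ clients whose local losses are pairwise similar.

```latex
\min_{x \in \mathbb{R}^d} \; F(x) := f(x) + r(x),
\qquad f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x),
```
where $r$ is the computationally heavy composite term and the local losses satisfy a $\delta$-similarity condition, e.g.
```latex
\bigl\| \nabla^2 f_i(x) - \nabla^2 f(x) \bigr\| \le \delta
\quad \text{for all } x \text{ and } i = 1, \dots, n.
```

Small $\delta$ (similar client data) is what typically lets such methods replace frequent communication of $f$ with cheap local computation.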