We address distributed learning problems, both nonconvex and convex, over undirected networks. In particular, we design a novel algorithm based on the distributed Alternating Direction Method of Multipliers (ADMM) to address the challenges of high communication costs and large datasets. Our design tackles these challenges i) by enabling the agents to perform multiple local training steps between communication rounds; and ii) by allowing the agents to employ stochastic gradients while carrying out local computations. We show that the proposed algorithm converges to a neighborhood of a stationary point, for nonconvex problems, and of an optimal point, for convex problems. We also propose a variant of the algorithm that incorporates variance reduction, thus achieving exact convergence. We show that the resulting algorithm indeed converges to a stationary (or optimal) point, and moreover that local training accelerates convergence. We thoroughly compare the proposed algorithms with the state of the art, both theoretically and through numerical results.
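To make the two design ingredients concrete, the following is a minimal sketch, not the paper's exact algorithm: a decentralized consensus-ADMM loop over a ring network for a least-squares problem, in which each agent replaces the exact primal update with a few local stochastic-gradient steps between communication rounds. The topology, penalty parameter, step size, number of local steps, and batch size are all illustrative assumptions.

```python
import numpy as np

# Hedged sketch (not the authors' method): decentralized consensus ADMM where
# each agent runs several local SGD steps on its augmented local objective
# between communication rounds, using mini-batch (stochastic) gradients.

rng = np.random.default_rng(0)

n_agents, dim, m_local = 5, 10, 50
# Undirected ring topology: neighbors of agent i (assumption for illustration)
neighbors = [[(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)]

# Synthetic local data: f_i(x) = 1/(2 m) ||A_i x - b_i||^2
x_true = rng.normal(size=dim)
A = [rng.normal(size=(m_local, dim)) for _ in range(n_agents)]
b = [A[i] @ x_true + 0.1 * rng.normal(size=m_local) for i in range(n_agents)]

rho, step, local_steps, batch = 1.0, 1e-2, 10, 8   # hypothetical tuning
x = [np.zeros(dim) for _ in range(n_agents)]        # local primal iterates
y = [np.zeros(dim) for _ in range(n_agents)]        # local dual variables

def stoch_grad(i, xi):
    """Mini-batch gradient of the local loss f_i at xi (stochastic gradient)."""
    idx = rng.choice(m_local, size=batch, replace=False)
    Ai, bi = A[i][idx], b[i][idx]
    return Ai.T @ (Ai @ xi - bi) / batch

for k in range(200):                                # communication rounds
    x_prev = [xi.copy() for xi in x]
    for i in range(n_agents):
        # Inexact primal update: a few local SGD steps on
        # f_i(x) + y_i^T x + (rho/2) * sum_j ||x - (x_i^k + x_j^k)/2||^2
        targets = [(x_prev[i] + x_prev[j]) / 2 for j in neighbors[i]]
        xi = x_prev[i].copy()
        for _ in range(local_steps):                # multiple local training steps
            g = stoch_grad(i, xi) + y[i]
            g += rho * sum(xi - t for t in targets)
            xi -= step * g
        x[i] = xi
    for i in range(n_agents):                       # dual update after communication
        y[i] += rho * sum(x[i] - x[j] for j in neighbors[i])

err = np.mean([np.linalg.norm(xi - x_true) for xi in x])
print(f"average distance to x_true after 200 rounds: {err:.3f}")
```

With exact minimization in place of the inner SGD loop, this reduces to standard decentralized consensus ADMM; the sketch only illustrates how local training steps and stochastic gradients enter the scheme, not the convergence guarantees stated above.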