This paper proposes a novel distributed semismooth-Newton-based augmented Lagrangian method for solving a class of optimization problems over networks, in which the global objective is the sum of locally held cost functions and communication is restricted to neighboring agents. Specifically, we apply the augmented Lagrangian method to an equivalent constrained reformulation of the original problem. Each resulting subproblem is solved inexactly by a distributed semismooth Newton method. By fully exploiting the structure of the generalized Hessian, we propose a distributed accelerated proximal gradient method that computes the Newton direction efficiently, eliminating the need to communicate full Hessian matrices. We also establish theoretical guarantees for the convergence of the proposed algorithm. Numerical experiments demonstrate the efficiency and superiority of our algorithm over state-of-the-art distributed algorithms.
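The key computational step described above, obtaining the semismooth Newton direction without exchanging full Hessian matrices, can be illustrated with a minimal single-machine sketch. The direction `d` solves the quadratic model `min_d 0.5 d^T H d + g^T d`, which an accelerated (proximal) gradient loop can approximate using only Hessian-vector products; the function name, the dense stand-in Hessian, and all parameters below are illustrative assumptions, not the paper's actual implementation, and the distributed splitting across agents is omitted.

```python
import numpy as np

def newton_direction_apg(hess_vec, g, lipschitz, iters=500):
    """Approximate d with H d = -g via Nesterov-accelerated gradient descent
    on the quadratic model 0.5 d^T H d + g^T d (prox is the identity here).

    hess_vec : callable v -> H @ v; only Hessian-vector products are needed,
               so no full Hessian matrix would ever be communicated.
    g        : gradient vector at the current iterate.
    lipschitz: upper bound on the largest eigenvalue of H (step size 1/L).
    """
    d = np.zeros_like(g)
    y = d.copy()
    t = 1.0
    for _ in range(iters):
        grad_model = hess_vec(y) + g           # gradient of the quadratic model at y
        d_next = y - grad_model / lipschitz    # plain gradient step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = d_next + ((t - 1.0) / t_next) * (d_next - d)  # momentum extrapolation
        d, t = d_next, t_next
    return d

# Toy check with a dense SPD stand-in for the generalized Hessian.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
H = A @ A.T + 5 * np.eye(5)
g = rng.standard_normal(5)
L = np.linalg.eigvalsh(H).max()
d = newton_direction_apg(lambda v: H @ v, g, L)
print(np.linalg.norm(H @ d + g))  # residual of the Newton system, near zero
```

In the distributed setting of the paper, each agent would apply only its local block of the generalized Hessian inside `hess_vec` and exchange vectors with neighbors, which is why the method avoids full-Hessian communication.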