We present a distributed quasi-Newton (DQN) method, which enables a group of agents to compute an optimal solution of a separable multi-agent optimization problem locally using an approximation of the curvature of the aggregate objective function. Each agent computes a descent direction from its local estimate of the aggregate Hessian, obtained from quasi-Newton approximation schemes using the gradient of its local objective function. Moreover, we introduce a distributed quasi-Newton method for equality-constrained optimization (EC-DQN), where each agent takes Karush-Kuhn-Tucker-like update steps to compute an optimal solution. In our algorithms, each agent communicates with its one-hop neighbors over a peer-to-peer communication network to compute a common solution. We prove convergence of our algorithms to a stationary point of the optimization problem. In addition, we demonstrate the competitive empirical convergence of our algorithms on both well-conditioned and ill-conditioned optimization problems, measured by the computation time and communication cost each agent incurs to converge, compared with existing distributed first-order and second-order methods. In particular, on ill-conditioned problems, our algorithms converge in less computation time while incurring a lower communication cost, across communication networks with varying degrees of connectedness.
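To make the high-level description concrete, the following is a minimal toy sketch of a decentralized quasi-Newton iteration in the spirit described above: each agent mixes iterates with its one-hop neighbors through a doubly stochastic weight matrix `W`, then steps along a direction scaled by its own BFGS-style inverse-Hessian estimate built from local gradient differences. This is an illustrative assumption-laden sketch, not the paper's DQN or EC-DQN algorithm; the function names (`bfgs_update`, `distributed_qn`), the choice of plain consensus mixing, and the step-size handling are all hypothetical simplifications.

```python
import numpy as np

def bfgs_update(H, s, y):
    # Standard BFGS update of an inverse-Hessian approximation H
    # (hypothetical helper; skips the update when the curvature
    # condition s·y > 0 fails, to keep H positive definite).
    sy = s @ y
    if sy <= 1e-10:
        return H
    rho = 1.0 / sy
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

def distributed_qn(grads, W, x0, step=0.5, iters=200):
    # Toy decentralized quasi-Newton sketch: grads[i] is agent i's local
    # gradient oracle, W is a doubly stochastic mixing matrix encoding the
    # one-hop communication network, and every agent starts from x0.
    n, d = len(grads), len(x0)
    X = np.tile(x0, (n, 1))                 # one row of iterates per agent
    H = [np.eye(d) for _ in range(n)]       # local inverse-Hessian estimates
    G = np.array([grads[i](X[i]) for i in range(n)])
    for _ in range(iters):
        X_new = W @ X                       # consensus mixing with neighbors
        for i in range(n):
            X_new[i] -= step * H[i] @ G[i]  # local quasi-Newton step
        G_new = np.array([grads[i](X_new[i]) for i in range(n)])
        for i in range(n):
            # Update curvature estimate from local iterate/gradient changes.
            H[i] = bfgs_update(H[i], X_new[i] - X[i], G_new[i] - G[i])
        X, G = X_new, G_new
    return X
```

With a constant step size, this plain consensus-plus-local-gradient scheme drives the network-average iterate toward a stationary point of the aggregate objective, but individual agents retain a disagreement proportional to the step size; the methods summarized in the abstract are designed to do better than this naive baseline, so the sketch only conveys the communication and curvature-estimation structure.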