We present a distributed conjugate gradient method for distributed optimization problems, in which each agent computes an optimal solution of the aggregate problem locally, without any central computation or coordination, while communicating only with its immediate (one-hop) neighbors over a communication network. Each agent updates its local problem variable using an estimate of the average conjugate direction across the network, computed via a dynamic consensus approach. Our algorithm allows the agents to use uncoordinated step-sizes, and we prove convergence of each agent's local variable to an optimal solution of the aggregate optimization problem without requiring decreasing step-sizes. In addition, we demonstrate the efficacy of our algorithm on distributed state estimation problems and their robust counterparts, comparing its performance with existing distributed first-order optimization methods.
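To make the update concrete, the following sketch simulates the general scheme on a toy problem. It is an illustrative assumption, not the paper's algorithm: five agents on a ring minimize a sum of local quadratics, each mixing with its one-hop neighbors via doubly stochastic Metropolis weights, stepping with an uncoordinated local step-size along a dynamic-consensus estimate `y_i` of the network-average direction, and deflecting its direction with a fixed coefficient `BETA` in place of an adaptive conjugate-gradient rule. All parameters and the topology are made up for illustration.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): minimize
# sum_i 0.5 * ||x - c_i||^2 over a 5-agent ring network.
rng = np.random.default_rng(0)
n, dim = 5, 3
c = rng.normal(size=(n, dim))            # local quadratic minimizers c_i
x_star = c.mean(axis=0)                  # minimizer of the aggregate problem

# Doubly stochastic Metropolis weights for a ring graph
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 1.0 / 3
    W[i, (i - 1) % n] = 1.0 / 3
    W[i, (i + 1) % n] = 1.0 / 3

alpha = rng.uniform(0.08, 0.12, size=n)  # uncoordinated local step-sizes
BETA = 0.5                               # fixed deflection factor (assumption)

x = np.zeros((n, dim))                   # local copies of the decision variable
d = -(x - c)                             # local directions (init: -gradient)
y = d.copy()                             # dynamic-consensus tracker, y_0 = d_0

for _ in range(1500):
    x = W @ x + alpha[:, None] * y       # mix with neighbors, step along y_i
    g = x - c                            # local gradients of f_i
    d_new = -g + BETA * d                # deflected (CG-style) direction update
    y = W @ y + d_new - d                # dynamic average consensus on directions
    d = d_new

# every agent's local variable approaches the aggregate optimum
err = float(np.max(np.abs(x - x_star)))
```

The `y` update is the dynamic average consensus step mentioned in the abstract: each agent mixes its tracker with its neighbors' trackers and adds the change in its own local direction, so the trackers follow the network-average direction without any central aggregation.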