We consider a decentralized optimization problem for networks affected by communication delays. Examples of such networks include collaborative machine learning, sensor networks, and multi-agent systems. To model communication delays, we add virtual non-computing nodes to the network, resulting in directed graphs. This motivates investigating decentralized optimization solutions on directed graphs. Existing solutions assume that nodes know their out-degrees, which limits their applicability. To overcome this limitation, we introduce a novel gossip-based algorithm, called DT-GO, that does not require nodes to know their out-degrees. The algorithm is applicable in general directed networks, such as networks with delays or limited acknowledgment capabilities. We derive convergence rates for both convex and non-convex objectives, showing that our algorithm achieves the same complexity order as centralized Stochastic Gradient Descent. In other words, the effects of the graph topology and delays are confined to higher-order terms. Additionally, we extend our analysis to accommodate time-varying network topologies. Numerical simulations are provided to support our theoretical findings.
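To make the two key ideas concrete, the following minimal sketch models a one-step delayed link with a virtual non-computing relay node, and runs gossip averaging with a row-stochastic mixing matrix, which each node can build from its in-neighbors alone, without knowing its out-degree. The topology, weights, and variable names here are illustrative assumptions, not the DT-GO algorithm itself.

```python
import numpy as np

# Hypothetical 3-node directed ring: 0 -> 1 -> 2 -> 0, where the
# link 2 -> 0 has a one-step delay.  The delay is modeled with a
# virtual non-computing relay node 3: 2 -> 3 -> 0.  Node 3 holds
# no local objective; it only forwards the value it received.
n_compute = 3
n = n_compute + 1  # computing nodes plus one virtual relay

# Row-stochastic mixing matrix: each row averages the node's
# in-neighbors (including itself, for the computing nodes).
# Constructing W only requires in-neighbor information, never
# out-degrees.
in_neighbors = {0: [0, 3], 1: [1, 0], 2: [2, 1], 3: [2]}
W = np.zeros((n, n))
for i, nbrs in in_neighbors.items():
    for j in nbrs:
        W[i, j] = 1.0 / len(nbrs)

# Initial values; the virtual relay starts at an arbitrary value.
x = np.array([1.0, 5.0, 9.0, 0.0])
for _ in range(200):
    x = W @ x  # one gossip round

# With a strongly connected graph and self-loops, the computing
# nodes reach consensus (on a weighted average, since W is only
# row-stochastic, not doubly stochastic).
print(np.allclose(x[:n_compute], x[0]))  # → True
```

In the full algorithm, each gossip round would be interleaved with local stochastic gradient steps; the sketch isolates only the mixing stage to show why out-degree knowledge is unnecessary with row-stochastic weights.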