The problem of distributed optimization requires a group of networked agents to compute a parameter that minimizes the average of their local cost functions. While a variety of distributed optimization algorithms can solve this problem, they are typically vulnerable to ``Byzantine'' agents that do not follow the algorithm. Recent attempts to address this issue focus on single-dimensional functions, or assume certain statistical properties of the agents' functions. In this paper, we provide two resilient, scalable, distributed optimization algorithms for multi-dimensional functions. Our schemes involve two filters, (1) a distance-based filter and (2) a min-max filter, each of which removes extreme states (defined precisely in our algorithms) from an agent's neighborhood at each iteration. We show that these algorithms can mitigate the impact of up to $F$ (unknown) Byzantine agents in the neighborhood of each regular agent. In particular, we show that if the network topology satisfies certain conditions, all of the regular agents' states are guaranteed to converge to a bounded region that contains the minimizer of the average of the regular agents' functions.
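The abstract only names the two filters; to make the mechanism concrete, the following is a minimal NumPy sketch under assumed definitions. Here the distance-based filter is read as discarding the $F$ neighbor states farthest in Euclidean norm from an agent's own state, and the min-max filter as discarding, in each coordinate, the neighbors holding the $F$ largest and $F$ smallest values. Both readings, and the function names, are illustrative assumptions rather than the paper's precise rules, which are defined in the algorithms themselves.

```python
import numpy as np

def distance_filter(own_state, neighbor_states, F):
    """Assumed reading: keep all but the F neighbor states farthest
    (in Euclidean norm) from the agent's own state."""
    dists = [np.linalg.norm(x - own_state) for x in neighbor_states]
    keep = np.argsort(dists)[: max(len(neighbor_states) - F, 0)]
    return [neighbor_states[i] for i in keep]

def min_max_filter(neighbor_states, F):
    """Assumed reading: in every coordinate, flag the neighbors with the
    F largest and F smallest values; drop any neighbor flagged as
    extreme in at least one coordinate."""
    X = np.stack(neighbor_states)        # shape: (num_neighbors, dim)
    n, d = X.shape
    extreme = np.zeros(n, dtype=bool)
    for j in range(d):
        order = np.argsort(X[:, j])      # ascending in coordinate j
        extreme[order[:F]] = True        # F smallest values
        extreme[order[n - F:]] = True    # F largest values
    return [x for x, e in zip(neighbor_states, extreme) if not e]
```

In either case a regular agent would then combine the surviving neighbor states (e.g., by averaging) and take a local gradient step, so that up to $F$ Byzantine states per neighborhood are removed before they can influence the update.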