We study distributed learning in the context of zero-order (gradient-free) optimisation and introduce FedZero, a federated zero-order algorithm with sharp theoretical guarantees. Our contributions are threefold. First, in the federated convex setting, we derive high-probability regret guarantees for FedZero. Second, in the single-worker regime, which corresponds to the classical zero-order framework with two-point feedback, we establish the first high-probability convergence guarantees for convex zero-order optimisation, strengthening previous results that held only in expectation. Third, to establish these guarantees, we develop novel concentration tools: (i) concentration inequalities with explicit constants for Lipschitz functions under the uniform measure on the $\ell_1$-sphere, and (ii) a time-uniform concentration inequality for squared sub-Gamma random variables. These probabilistic results underpin our high-probability guarantees and may also be of independent interest.
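For context, a standard two-point zero-order gradient estimator under $\ell_1$-randomisation, the setting in which concentration on the $\ell_1$-sphere naturally arises, queries the objective at two symmetric points: with query radius $h > 0$ and $\zeta$ drawn uniformly from the $\ell_1$-sphere $\partial B_1^d = \{u \in \mathbb{R}^d : \|u\|_1 = 1\}$,
\[
\hat g \;=\; \frac{d}{2h}\,\bigl(f(x + h\zeta) - f(x - h\zeta)\bigr)\,\operatorname{sign}(\zeta),
\]
where $\operatorname{sign}(\cdot)$ acts coordinatewise. This display is an illustrative sketch of the classical construction only; the exact estimator used by FedZero is not specified in the abstract and may differ.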