We study the problem of minimizing the expectation of smooth nonconvex functions with the help of several parallel workers whose role is to compute stochastic gradients. In particular, we focus on the challenging situation where the workers' compute times are arbitrarily heterogeneous and random. In the simpler regime characterized by arbitrarily heterogeneous but deterministic compute times, Tyurin and Richt\'arik (NeurIPS 2023) recently designed the first theoretically optimal asynchronous SGD method, called Rennala SGD, in terms of a novel complexity notion called time complexity. The starting point of our work is the observation that Rennala SGD can have arbitrarily bad performance in the presence of random compute times -- a setting it was not designed to handle. To advance our understanding of stochastic optimization in this challenging regime, we propose a new asynchronous SGD method, for which we coin the name MindFlayer SGD. Our theory and empirical results demonstrate the superiority of MindFlayer SGD over existing baselines, including Rennala SGD, when the noise is heavy-tailed.
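The failure mode alluded to above can be illustrated with a small simulation (a hedged sketch under our own assumptions, not the paper's algorithm: the `time_to_collect` routine and the Pareto model of per-gradient compute times are illustrative choices). When compute times are heavy-tailed, their mean can dwarf their median, so a strategy that waits for every started gradient computation to finish pays far more than the "typical" cost:

```python
import heapq
import random
import statistics

def pareto_time(alpha: float, rng: random.Random) -> float:
    # Inverse-CDF sample from a Pareto(alpha) distribution with scale 1.
    # For alpha close to 1 the tail is heavy: the mean alpha/(alpha-1)
    # is far larger than the median 2**(1/alpha).
    u = rng.random()
    return (1.0 - u) ** (-1.0 / alpha)

def time_to_collect(num_workers: int, batch: int, alpha: float,
                    rng: random.Random) -> float:
    # Illustrative asynchronous batch collection: each worker computes
    # stochastic gradients back to back, each taking an i.i.d. Pareto(alpha)
    # amount of time; return the wall-clock time at which `batch` gradients
    # have arrived in total. (A simplified stand-in, not Rennala SGD itself.)
    finish = [pareto_time(alpha, rng) for _ in range(num_workers)]
    heapq.heapify(finish)
    collected = 0
    while True:
        t = heapq.heappop(finish)
        collected += 1
        if collected == batch:
            return t
        heapq.heappush(finish, t + pareto_time(alpha, rng))

rng = random.Random(0)
samples = [pareto_time(1.1, rng) for _ in range(100_000)]
print(f"median per-gradient time: {statistics.median(samples):.2f}")
print(f"mean   per-gradient time: {statistics.fmean(samples):.2f}")
print(f"time to collect 64 gradients from 8 workers: "
      f"{time_to_collect(8, 64, 1.1, rng):.2f}")
```

The gap between the printed median and mean is exactly the regime the abstract describes: a method tuned to deterministic compute times implicitly budgets for the typical (median-like) cost, while the realized cost is driven by the tail.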