Artificial intelligence has advanced rapidly through large neural networks trained on massive datasets using thousands of GPUs or TPUs. Such training can occupy entire data centers for weeks and requires enormous computational and energy resources. Yet the optimization algorithms behind these runs have not kept pace. Most large-scale training still relies on synchronous methods, where workers must wait for the slowest device, wasting compute and amplifying the effects of hardware and network variability.

Removing synchronization seems like a simple fix, but asynchrony introduces staleness: updates are computed from outdated versions of the model. This makes analysis difficult, especially when delays arise from system-level randomness rather than algorithmic choices. As a result, the time complexity of asynchronous methods remains poorly understood.

This dissertation develops a rigorous framework for asynchronous first-order stochastic optimization, focusing on the core challenge of heterogeneous worker speeds. Within this framework, we show that, with proper design, asynchronous SGD can achieve optimal time complexity, matching guarantees previously known only for synchronous methods. Our first contribution, Ringmaster ASGD, attains optimal time complexity in the homogeneous-data setting by selectively discarding updates that are too stale. The second, Ringleader ASGD, extends this optimality to heterogeneous data, common in federated learning, using a structured gradient-table mechanism. Finally, ATA improves resource efficiency by learning the distributions of worker compute times and allocating tasks adaptively, achieving near-optimal wall-clock time with less computation. Together, these results establish asynchronous optimization as a theoretically sound and practically efficient foundation for distributed learning, showing that coordination without synchronization can be both feasible and optimal.
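To make the staleness mechanism concrete, the following is a minimal, self-contained sketch of asynchronous SGD with a hard staleness threshold, in the spirit of the discard rule attributed to Ringmaster ASGD above. It is not the dissertation's implementation: the toy quadratic objective, the event-driven timing model, and all names (`async_sgd`, `max_staleness`, and so on) are illustrative assumptions.

```python
# Illustrative sketch only: asynchronous SGD where updates whose staleness
# exceeds a threshold are discarded. Worker speeds, the objective, and the
# timing model are assumptions made for this example.
import heapq
import numpy as np

rng = np.random.default_rng(0)
d = 10                       # problem dimension
x_star = rng.normal(size=d)  # minimizer of the toy quadratic f(x) = 0.5 * ||x - x*||^2

def stochastic_grad(x):
    """Gradient of the toy quadratic plus Gaussian noise."""
    return (x - x_star) + 0.1 * rng.normal(size=d)

def async_sgd(num_workers=8, num_updates=2000, lr=0.1, max_staleness=4):
    x = np.zeros(d)          # server model
    version = 0              # server iteration counter
    # Heterogeneous mean compute times simulate fast and slow workers.
    speeds = rng.uniform(0.5, 5.0, size=num_workers)

    # Event queue entries: (finish_time, worker_id, model version the
    # gradient was computed at, gradient). Each worker starts one job.
    events = []
    for w in range(num_workers):
        t = rng.exponential(speeds[w])
        heapq.heappush(events, (t, w, version, stochastic_grad(x)))

    applied = discarded = 0
    while applied < num_updates:
        t, w, v, g = heapq.heappop(events)
        staleness = version - v
        if staleness <= max_staleness:
            x -= lr * g      # apply the (possibly stale) update
            version += 1
            applied += 1
        else:
            discarded += 1   # too stale: discard, per the threshold rule
        # Worker w immediately starts a new gradient at the current model;
        # no worker ever waits for another (no synchronization barrier).
        heapq.heappush(events, (t + rng.exponential(speeds[w]),
                                w, version, stochastic_grad(x)))
    return x, discarded

x_final, n_discarded = async_sgd()
print(f"final error: {np.linalg.norm(x_final - x_star):.4f}, "
      f"discarded updates: {n_discarded}")
```

Note the design point this sketch is meant to surface: no worker ever blocks on another, so the slowest device cannot stall the run; the price is that some gradients arrive computed against an old model version, and the threshold bounds how much of that staleness the server tolerates.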