Despite the widespread success of Transformers across various domains, their optimization guarantees in large-scale model settings are not well understood. This paper rigorously analyzes the convergence properties of gradient flow in training Transformers with weight decay regularization. First, we construct the mean-field limit of large-scale Transformers, showing that as the model width and depth tend to infinity, gradient flow converges to the Wasserstein gradient flow, which is characterized by a partial differential equation (PDE). We then show that, when the weight decay regularization parameter is sufficiently small, the gradient flow reaches a global minimum consistent with the PDE solution. Our analysis builds on a series of novel mean-field techniques adapted to Transformers. In contrast to existing tools for deep networks (Lu et al., 2020), which require homogeneity and global Lipschitz smoothness, our refined analysis assumes only $\textit{partial homogeneity}$ and $\textit{local Lipschitz smoothness}$. These new techniques may be of independent interest.
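For concreteness, a Wasserstein gradient flow of a functional $F$ over parameter distributions can be written schematically as the continuity equation below; the notation ($\rho_t$, $F$, $\theta$) is illustrative and not taken from the paper itself:

$$\partial_t \rho_t = \nabla_\theta \cdot \left( \rho_t \, \nabla_\theta \frac{\delta F}{\delta \rho}[\rho_t] \right),$$

where $\rho_t$ is the distribution of model parameters $\theta$ at training time $t$ and $\delta F / \delta \rho$ denotes the first variation of $F$. Under weight decay regularization, $F$ would additionally carry a penalty term of the form $\lambda \int \|\theta\|^2 \, \mathrm{d}\rho(\theta)$, whose contribution to the velocity field is $2\lambda\theta$.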