Adam has been shown to outperform gradient descent on large language models by a larger margin than on other tasks, but it is unclear why. We show that a key factor in this performance gap is the heavy-tailed class imbalance found in language tasks. When trained with gradient descent, the loss on infrequent words decreases more slowly than the loss on frequent ones. Because most samples come from infrequent words, this leads to a slow decrease in the average loss. Adam and sign-based methods, in contrast, are less sensitive to this problem. To establish that this behavior is caused by class imbalance, we show empirically that it can be reproduced across architectures and data types, on language transformers, vision CNNs, and linear models. On a linear model with cross-entropy loss, we show that class imbalance leads to imbalanced, correlated gradients and Hessians that have been hypothesized to benefit Adam. We also prove that, in continuous time, gradient descent converges slowly on low-frequency classes while sign descent does not.
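The core claim can be illustrated with a small experiment. The sketch below is a hypothetical toy setup, not the paper's exact experiment: a linear softmax classifier is trained on data with Zipf-like (heavy-tailed) class frequencies, once with plain gradient descent and once with sign descent as a stand-in for Adam-like updates. All names, sizes, and step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Heavy-tailed (Zipf-like) class frequencies: a few frequent classes,
# many rare ones. Hypothetical toy setup for illustration only.
n_classes = 20
freqs = 1.0 / np.arange(1, n_classes + 1)
freqs /= freqs.sum()

n_samples = 2000
y = rng.choice(n_classes, size=n_samples, p=freqs)
# One roughly class-aligned feature per class, plus noise.
X = np.eye(n_classes)[y] + 0.1 * rng.standard_normal((n_samples, n_classes))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def per_class_loss(W):
    """Mean cross-entropy loss restricted to each class's samples."""
    p = softmax(X @ W)
    nll = -np.log(p[np.arange(n_samples), y] + 1e-12)
    return np.array([nll[y == c].mean() for c in range(n_classes)])

def train(transform, steps=200, lr=0.5):
    """Full-batch training; `transform` maps the gradient to an update direction."""
    W = np.zeros((n_classes, n_classes))
    onehot = np.eye(n_classes)[y]
    for _ in range(steps):
        G = X.T @ (softmax(X @ W) - onehot) / n_samples  # cross-entropy gradient
        W -= lr * transform(G)
    return W

W_gd = train(lambda g: g)                      # gradient descent
W_sd = train(np.sign, lr=0.02)                 # sign descent (Adam-like)

rare = n_classes - 1  # least frequent class
loss_gd = per_class_loss(W_gd)[rare]
loss_sd = per_class_loss(W_sd)[rare]
print(f"rare-class loss  GD: {loss_gd:.3f}  sign: {loss_sd:.3f}")
```

The intuition matching the abstract: the gradient contribution of a rare class scales with its frequency, so gradient descent moves its weights slowly, while sign descent takes fixed-magnitude steps per coordinate regardless of how small the (consistently signed) gradient is, so the rare class's loss decreases at a comparable rate to the frequent ones.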