Spectral gradient methods, such as the recently popularized Muon optimizer, are a promising alternative to standard Euclidean gradient descent for training deep neural networks and transformers, but it is still unclear in which regimes they are expected to perform better. We propose a simple layerwise condition that predicts when a spectral update yields a larger decrease in the loss than a Euclidean gradient step. This condition compares, for each parameter block, the squared nuclear-to-Frobenius ratio of the gradient to the stable rank of the incoming activations. To understand when this condition may be satisfied, we first prove that post-activation matrices have low stable rank at Gaussian initialization in random feature regression, feedforward networks, and transformer blocks. In spiked random feature models we then show that, after a short burn-in, the Euclidean gradient's nuclear-to-Frobenius ratio grows with the data dimension while the stable rank of the activations remains bounded, so the predicted advantage of spectral updates scales with dimension. We validate these predictions in synthetic regression experiments and in NanoGPT-scale language model training, where we find that intermediate activations have low stable rank throughout training and the corresponding gradients maintain large nuclear-to-Frobenius ratios. Together, these results identify conditions under which spectral gradient methods such as Muon are effective for training deep networks and transformers.
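To make the layerwise condition concrete, the following is a minimal numpy sketch of the two quantities it compares: the gradient's squared nuclear-to-Frobenius ratio, (Σᵢσᵢ(G))²/Σᵢσᵢ(G)², and the stable rank of the incoming activations, ‖A‖_F²/‖A‖_2². The function names and the direction of the inequality in `spectral_update_favored` are illustrative assumptions, not the paper's code.

```python
import numpy as np

def nuclear_frobenius_ratio_sq(G):
    """Squared nuclear-to-Frobenius ratio ||G||_*^2 / ||G||_F^2 of a gradient block."""
    s = np.linalg.svd(G, compute_uv=False)  # singular values, descending
    return s.sum() ** 2 / (s ** 2).sum()

def stable_rank(A):
    """Stable rank ||A||_F^2 / ||A||_2^2 of an activation matrix."""
    s = np.linalg.svd(A, compute_uv=False)
    return (s ** 2).sum() / s[0] ** 2  # s[0] is the spectral norm

def spectral_update_favored(G, A):
    """Layerwise condition (as described in the abstract): a spectral update is
    predicted to decrease the loss more than a Euclidean step when the gradient's
    squared nuclear-to-Frobenius ratio exceeds the activations' stable rank."""
    return nuclear_frobenius_ratio_sq(G) > stable_rank(A)

# Hypothetical usage on random matrices standing in for a gradient block
# and its incoming activations:
rng = np.random.default_rng(0)
G = rng.standard_normal((512, 512))
A = rng.standard_normal((512, 512))
print(spectral_update_favored(G, A))
```

Both quantities lie between 1 and the rank of the matrix, so the comparison is scale-free: a high-ratio gradient paired with low-stable-rank activations is the regime where, by the abstract's argument, the spectral step should win.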